title | date | lastmod |
---|---|---|
Virtual Memory | 2022-11-08 | 2022-11-21 |
In memory organisation, we assumed that each program must be loaded into memory in its entirety. This restricts the size of a program to the size of physical memory.
[!Aim] Size of virtual memory is limited by the address scheme of the computer, not the actual size of physical memory
A process can be swapped temporarily out of memory into a backing store.
- Backing store: fast disk large enough to accommodate copies of all memory images for all users
Swap time is dominated by transfer time, which is proportional to the amount of memory swapped.
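A back-of-the-envelope calculation (the figures here are assumed for illustration, not from the notes) showing why transfer time dominates swap cost:

```python
# Illustrative figures: swapping moves the whole memory image, so swap time
# scales with process size divided by backing-store bandwidth.
process_size_mb = 100                 # size of the memory image to swap
transfer_rate_mb_s = 50               # assumed backing-store bandwidth
transfer_time_s = process_size_mb / transfer_rate_mb_s
total_swap_s = 2 * transfer_time_s    # swap one image out, another in
```

Doubling the process size doubles the swap time, which is why swapping whole processes is expensive compared to paging.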
To implement virtual memory, demand paging is used: each process is divided into pages, and a page is loaded into memory only when it is demanded. Initially, no pages are in memory.
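A minimal sketch of the idea (class and names invented for illustration): a page is brought in only on first access, and that first access counts as a page fault.

```python
class DemandPager:
    """Toy demand pager: pages start on the backing store, not in memory."""

    def __init__(self):
        self.resident = set()   # pages currently in memory
        self.faults = 0

    def access(self, page):
        if page not in self.resident:
            self.faults += 1    # page fault: fetch page from backing store
            self.resident.add(page)
        return page

pager = DemandPager()
for p in [0, 1, 0, 2, 1, 3]:    # first touch of 0, 1, 2, 3 each faults
    pager.access(p)
```

After this reference string, only the four distinct pages have been loaded, and only the first access to each one faulted.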
Consider what occurs if a process does not have “enough” frames—that is, it does not have the minimum number of frames it needs to support pages in the working set. The process will quickly page-fault. At this point, it must replace some page. However, since all its pages are in active use, it must replace a page that will be needed again right away. Consequently, it quickly faults again, and again, and again, replacing pages that it must bring back in immediately. This high paging activity is thrashing.
To prevent thrashing, we must provide a process with as many frames as it needs.
Based on the concept of locality of reference: processes tend to refer to pages in a localised manner
Temporal locality: locations referred to recently are likely to be referenced again
Spatial locality: code and data are usually clustered physically
Implementation:
Additional metrics:
- Establish an upper and lower bound for acceptable page-fault rate and allocate and deallocate frames accordingly
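The bound-based policy above can be sketched as a page-fault-frequency controller (the bounds and function name here are invented for illustration): too many faults earns the process another frame, too few gives one back.

```python
def adjust_frames(frames, fault_rate, lower=0.02, upper=0.10, min_frames=1):
    """Return the new frame allocation given the measured page-fault rate.

    Bounds are assumed values: above `upper` the process is thrashing-prone
    and gets a frame; below `lower` it has frames to spare.
    """
    if fault_rate > upper:
        return frames + 1                        # allocate an extra frame
    if fault_rate < lower:
        return max(min_frames, frames - 1)       # deallocate one frame
    return frames                                # within acceptable band
```

Run periodically per process, this keeps each process's fault rate inside the acceptable band without a fixed allocation.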
If there are no empty frames, OS needs to locate a victim to evict:
[!Inclusion Property] Pages loaded in n frames are always a subset of the pages loaded in n+1 frames
An algorithm does not suffer from Belady's anomaly if it satisfies the inclusion property: as the number of frames grows, the set of resident pages only grows, so no page that was resident with fewer frames is ever evicted, and the fault count cannot increase.
An example with FIFO:
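A small FIFO simulation makes the anomaly concrete (the reference string is assumed here; it is the classic one used to demonstrate Belady's anomaly). FIFO violates the inclusion property, and on this string it faults more with 4 frames than with 3.

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults for FIFO replacement with n_frames frames."""
    frames = deque()                  # oldest loaded page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()      # evict the earliest loaded page
            frames.append(page)
    return faults

# Classic Belady reference string (assumed, not from the notes)
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

Here `fifo_faults(refs, 3)` gives 9 faults while `fifo_faults(refs, 4)` gives 10: adding a frame made things worse.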
- Fixed allocation: give a process a fixed number of frames in memory for its entire execution.
- Variable allocation: allow the number of frames allocated to the process to vary across its execution lifetime.
- Global replacement: a process selects a replacement frame from the set of all frames; one process can take a frame from another. Implication: the performance of a process depends on external processes.
- Local replacement: each process selects only from its own set of allocated frames. Implication: a process may hinder others by not making its less-used pages/frames available.
Specific choices of data structures and program structure can affect the performance of demand paging.
- Data structures like a hash set or linked list may offer less locality of reference than an array, possibly reducing performance
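A toy model of this effect (parameters and page mapping are assumed): a contiguous array maps to consecutive pages, while linked-list nodes scattered across the heap map to effectively random pages. Counting LRU faults on both reference patterns shows the gap.

```python
import random

def lru_faults(page_refs, n_frames):
    """LRU fault count for a sequence of page references (toy model)."""
    frames = []                       # least recently used page at index 0
    faults = 0
    for p in page_refs:
        if p in frames:
            frames.remove(p)          # hit: move page to the MRU end
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)         # evict the least recently used page
        frames.append(p)
    return faults

# Toy model: 1024 elements, 64 elements per page, 16 pages total.
array_pages = [i // 64 for i in range(1024)]              # contiguous layout
random.seed(0)
list_pages = [random.randrange(16) for _ in range(1024)]  # scattered nodes
```

With only 4 frames resident, the sequential array traversal faults exactly once per page (16 faults), while the scattered pattern faults hundreds of times over the same number of accesses.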
a. FIFO: replace the earliest loaded page. Page 2.
b. Second chance: replace the first page with R = 0. Page 0.
c. LRU: replace the least recently used page. Page 1.
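The rule of replacing the first page with R = 0 is the second-chance (clock) algorithm; a hedged sketch of the victim search (function and variable names are invented):

```python
def clock_victim(pages, ref_bits, hand):
    """Return (victim_index, new_hand) for the clock algorithm.

    pages and ref_bits are parallel lists; hand is the current position.
    Pages with R == 1 get a second chance: their bit is cleared and the
    hand advances until a page with R == 0 is found.
    """
    n = len(pages)
    while True:
        if ref_bits[hand] == 0:
            return hand, (hand + 1) % n   # victim found; hand moves on
        ref_bits[hand] = 0                # second chance: clear R, advance
        hand = (hand + 1) % n
```

For example, with `ref_bits = [1, 0, 1]` and the hand at index 0, the hand clears R on page 0, then evicts page 1 (the first page with R = 0).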
- Increasing page size will result in fewer page faults if data is accessed sequentially
- If data access is random, more paging actions could occur because fewer pages can be kept in memory
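The page-size tradeoff above can be put in numbers (all parameters assumed): sequential access faults once per distinct page, so doubling the page size halves the faults; but under a fixed memory budget it also halves how many pages stay resident, which is what hurts random access.

```python
def sequential_faults(n_bytes, page_size):
    """Faults for one sequential pass: one fault per distinct page (ceil)."""
    return -(-n_bytes // page_size)

MEMORY_BYTES = 64 * 1024                  # assumed fixed physical allocation

def resident_pages(page_size):
    """How many pages fit in the fixed memory budget."""
    return MEMORY_BYTES // page_size
```

Scanning 1 MiB sequentially costs 256 faults at 4 KiB pages but only 128 at 8 KiB; meanwhile the number of resident pages drops from 16 to 8, so a random workload has fewer chances of hitting a resident page.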