### Lecture 08 - Memory Management
Memory within a computer system is organised in a hierarchy, where faster memory is more expensive and therefore scarcer
![[Pasted image 20240218145742.png]]
The task of the OS is to supply processes with "working memory", i.e. the main memory resource. The memory manager is the software component responsible for supplying the processes; besides the used memory it also manages the free memory areas.
**Locality principle**
**Temporal** - data or code areas that are currently being used will most likely be needed again soon, so it makes sense to keep them ready for the next access
**Spatial** - upcoming data or code accesses are very likely to be close to previous data or code accesses, so loading neighbouring data into faster memory increases speed
| Temporal principle | Spatial principle |
| ---- | ---- |
| ![[Pasted image 20240218150248.png]] | ![[Pasted image 20240218150254.png]] |
**Address and address spaces**
Main memory is divided into logically addressable storage locations, usually byte by byte (8 bits); 32-bit addresses can map $2^{32}$ addressable units. The address space is the set of all addressable addresses, from $0$ to $2^{32} - 1$
The occupancy of the address space is determined by an address space allocation plan, which is set by the OS. The area for application programs and application data is organised by the compiler/interpreter or the runtime system; depending on the programming language, several variants are possible, for example:
![[Pasted image 20240218151548.png|inlR|500]]
**Memory Management Mechanisms** - as computer systems have developed, the mechanisms have changed; historically there are:
- **Memory management in monoprogramming** - the simplest form of memory management, where only one program at a time resides in memory
![[Pasted image 20240218151812.png]]
- **Storage management with fixed partitions** - division of memory into fixed partitions; it enables multiprogramming and improves CPU utilisation. Jobs are entered either into one global queue or into one queue per partition
![[Pasted image 20240218151936.png]]
- **Memory management with swapping** - the basic idea is timesharing, since not all processes always fit in main memory; processes are loaded as a whole. After a certain time a process is swapped out again to secondary storage. Memory holes that arise are eliminated by combining adjacent free areas, but this is time-consuming. The main difference to fixed partitions is that the number of processes, the size of the memory area used per process, and the storage location all vary dynamically, since a process is always loaded wherever there is enough space
![[Pasted image 20240218152216.png]]
#### Basic principles of virtual memory
The motivation for virtual memory comes from the problem that the memory size of a program, including data and stack, may exceed the existing physical main memory; a process can then only partially reside in main memory. For software engineers it is also easier to deal with one linear address space: the OS keeps the currently used parts in main memory and the rest on the hard drive.
![[Pasted image 20240218152929.png|inlR|400]]
**Shared memory**
![[Pasted image 20240218152949.png|inlR|400]]
In virtual memory the memory manager uses a portion of a mass storage device to give the illusion that it is part of main memory. This makes it possible to run programs larger than the available memory. Memory is managed using several strategies:
- Retrieval strategy - fetch policy, i.e. demand paging and prepaging
- Memory allocation strategy - placement policy
- Exchange strategy - replacement policy
- Cleanup strategy - cleaning policy
**Paging** is the migration of pages between main memory and disk. Each process may use all addresses that are possible with the computer's hardware architecture, regardless of the real size of main memory. For example, on a system with 32-bit addresses each process can use a 4 GiB address space, even if main memory only has a few MiB of real memory. However, this has its limits if the system is not to be kept busy exclusively with paging.
The Memory Management Unit (MMU) receives the virtual addresses issued by the CPU and sends the translated real addresses to main memory
![[Pasted image 20240218155634.png]]
![[Pasted image 20240218155643.png|inlR|400]]
Mapping of virtual address to real address space:
![[Pasted image 20240218155716.png|inlR|400]]
A process's memory map consists of memory pages; a memory page is a segment of a given size, e.g. 4 KiB. Only the memory pages that are actually needed have to be loaded into memory while the process is running. The size of the pages is usually small, but their number is large.
**Address translation** - the virtual address is divided into the virtual page number, which is an index into the page table, and the offset. The associated entry in the page table is found via the virtual page number; the entry contains the frame number if the page is assigned to a frame. So
f(page number) = frame number // if the page is in memory, otherwise page fault
![[Pasted image 20240218160354.png|inlR|500]]
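To make the translation concrete, here is a minimal C sketch of the single-level case, assuming 4 KiB pages; the names (`pte_t`, `translate`) are illustrative, not from the lecture:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE   4096u          /* 4 KiB pages -> 12-bit offset */
#define OFFSET_BITS 12u

typedef struct {
    uint32_t frame;   /* frame number, valid only if present */
    bool     present; /* page currently assigned to a frame? */
} pte_t;

/* Translate a virtual address via a single-level page table.
 * Returns true and writes the real address, or false on a page fault. */
bool translate(const pte_t *page_table, uint32_t vaddr, uint32_t *raddr)
{
    uint32_t page   = vaddr >> OFFSET_BITS;      /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* offset within page  */

    if (!page_table[page].present)
        return false;                            /* page fault */

    *raddr = (page_table[page].frame << OFFSET_BITS) | offset;
    return true;
}
```

For the `MOVE R1, 8196` example below: page = 8196 >> 12 = 2 and offset = 8196 & 4095 = 4, so if page 2 is mapped to the lowest frame (frame 0), the real address is 0 · 4096 + 4 = 4.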
Inserting a page table entry depends heavily on the system; here is an example:
![[Pasted image 20240218160530.png|inlR|400]]
![[Pasted image 20240218160542.png|inlR|600]]
**Example addressing**:
- Command: MOVE R1, 8196 // with R1 = CPU register
- Address is in Page 2 of the virtual address space
- Address is sent to the MMU
- The MMU transforms 8196 into 4 (page 2 maps to the lowest frame, offset 4)
- In main memory, address 4 in the lowest frame is addressed
If the addressed address is not in main memory, the MMU raises a trap in the CPU into the OS: a page fault, where a page replacement strategy becomes necessary.
The mapping needs to be performed fast, and page tables can become very large: a 32-bit address space with a page size of 4 KiB results in about 1 million entries in the page table; at 4 bytes per entry, 4 MiB of main memory is required. Each process needs its own page table, hence the memory savings through multi-level page tables, meaning that not all page tables have to be kept in memory
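The arithmetic behind these figures, for a 4 KiB page size and 4-byte entries:
$$\frac{2^{32}\,\text{B}}{2^{12}\,\text{B/page}} = 2^{20} \approx 10^{6}\ \text{entries}, \qquad 2^{20} \cdot 4\,\text{B} = 4\,\text{MiB per process}$$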
**Multi-level address translation** - for example i1 = 10 bits, i2 = 10 bits, d = 12 bits
![[Pasted image 20240218161149.png|inlR|500]]
![[Pasted image 20240218161158.png|inlR|500]]
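A sketch of the 10/10/12 split in C (type and function names are illustrative); the memory saving comes from second-level tables that simply do not exist for unused regions of the address space:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint32_t frame;
    int      present;
} pte_t;

typedef struct {
    pte_t *table;   /* second-level page table, NULL if the region is unused */
} pde_t;

/* Two-level translation with i1 = 10 bits, i2 = 10 bits, d = 12 bits. */
int translate2(const pde_t *dir, uint32_t vaddr, uint32_t *raddr)
{
    uint32_t i1 = (vaddr >> 22) & 0x3FF;  /* top 10 bits: directory index */
    uint32_t i2 = (vaddr >> 12) & 0x3FF;  /* next 10 bits: table index    */
    uint32_t d  =  vaddr        & 0xFFF;  /* low 12 bits: offset          */

    if (dir[i1].table == NULL || !dir[i1].table[i2].present)
        return 0;                         /* page fault */

    *raddr = (dir[i1].table[i2].frame << 12) | d;
    return 1;
}
```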
#### Optimisation of virtual memory
Virtual addressing is relatively complex: many extensive tables are required per process, part of the hard drive is used as the paging area, and it must be continuously examined whether pages remain resident in main memory or should be swapped out to disk. Despite the overhead, virtual addressing is the most commonly used method today; optimisation is necessary, e.g. larger pages on 64-bit architectures, among other options.
The address translation buffer (Translation Lookaside Buffer, TLB) is used for fast translation; it holds the virtual-to-real address mappings for the currently most frequently required addresses. Hence all memory translations are first forwarded to the TLB; if the address is found there, no access to the page table is required, saving main memory accesses and resulting in a significant performance optimisation. The TLB is part of the MMU
![[Pasted image 20240218162338.png]]
A TLB entry contains the virtual page number, a reference to the page frame in main memory, and a tag for address space identification, e.g. the process ID. A tagged TLB is needed since a virtual address alone is not unique in the OS; the tag can be omitted if the TLB is completely cleared on a context switch (TLB flush)
![[Pasted image 20240218162532.png]]
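A minimal sketch of a tagged, fully associative TLB lookup; the 64-entry size and all names are assumptions for illustration:

```c
#include <stdint.h>

#define TLB_ENTRIES 64

typedef struct {
    uint32_t vpage;   /* virtual page number                    */
    uint32_t frame;   /* page frame in main memory              */
    uint32_t asid;    /* address space tag, e.g. the process ID */
    int      valid;
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Hit only if both the virtual page and the address space tag match;
 * without the tag, every context switch would require a TLB flush. */
int tlb_lookup(uint32_t vpage, uint32_t asid, uint32_t *frame)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpage == vpage && tlb[i].asid == asid) {
            *frame = tlb[i].frame;
            return 1;          /* TLB hit: no page table access needed */
        }
    }
    return 0;                  /* TLB miss: walk the page table */
}
```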
Optimisation can also be achieved using inverted page tables: on 64-bit processors virtual memory is much larger than real memory, and up to 6 page table levels would be needed, with immense computing effort. The idea is to create a table that maps real addresses to virtual ones, i.e. inverted, with one entry per frame in the inverted page table. The advantage is that significantly fewer table entries exist, only as many as there are page frames in main memory. The disadvantage is that there is no ordering by virtual address in the table, and searching is more complex because positioning cannot be done via a page table index; the combination with a TLB is therefore common
![[Pasted image 20240218162821.png]]
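A sketch of the inverted-table idea with one entry per frame (names illustrative); the linear scan stands in for the hash over (process ID, virtual page) that a real implementation would use, which is exactly the search overhead mentioned above:

```c
#include <stdint.h>

#define NUM_FRAMES 1024   /* one inverted-table entry per page frame */

typedef struct {
    uint32_t vpage;   /* which virtual page occupies this frame */
    uint32_t pid;     /* owning process                         */
    int      used;
} inv_entry_t;

static inv_entry_t inv[NUM_FRAMES];

/* The frame number is simply the index of the matching entry. */
int inv_lookup(uint32_t pid, uint32_t vpage, uint32_t *frame)
{
    for (uint32_t f = 0; f < NUM_FRAMES; f++) {
        if (inv[f].used && inv[f].pid == pid && inv[f].vpage == vpage) {
            *frame = f;
            return 1;
        }
    }
    return 0;         /* not resident: page fault */
}
```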
**Virtual memory advantages: summary**
- Processes need not reside completely in main memory in order to run.
- Linear memory addressing and no fragmentation from the programmer's perspective.
- When changing processes, the CPU keeps the process's main-memory-resident pages; pages are only lost when they are displaced by the management of real memory.
- A user program can make use of the full virtual address space if there is enough hard disk space.
- The actual real storage allocation changes dynamically according to demand.
- Memory protection mechanisms are easy to implement.
#### Page replacement and displacement
Consider first the scenario where a new process requires memory and there is enough space in main memory. Executable files are usually not stored in the paging area
![[Pasted image 20240218182435.png]]
A page request is sent, but there is not enough space in main memory and replacement is necessary
![[Pasted image 20240218182547.png]]
**Page fault** - a page access fault: a frame must be found for the page to be stored, and the OS may have to select a page to be removed from memory to make room. Optimal would be to determine future page accesses in advance, so that the pages needed can be kept loaded.
**Belady's optimal page replacement** algorithm minimises the number of page faults by selecting for replacement the page that will not be used for the longest time in the future; this requires knowing future accesses in advance.
A simple example with 3 frames, 6 replacements after the first occupancy:
![[Pasted image 20240218183157.png]]
Example:
Access order: 0-1-2-3-4-0-1-5-6-0-1
| access | 0 | 1 | 2 | 3 | 4 | 0 | 1 | 5 | 6 | 0 | 1 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| RAM | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| RAM | - | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| RAM | - | - | 2 | 3 | 4 | 4 | 4 | 5 | 6 | 6 | 6 |
| P.A. | | | | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| P.A. | | | | | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| P.A. | | | | | | | | 4 | 4 | 4 | 4 |
| P.A. | | | | | | | | | 5 | 5 | 5 |
RAM = real memory; P.A. = paging area; (x) = page replacement necessary
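A small C simulation of Belady's OPT on the access order above (a sketch, not lecture code); it reports 7 page faults for 3 frames, i.e. the 3 initial loads plus the 4 replacements visible in the RAM rows of the table:

```c
#include <stdio.h>

#define FRAMES 3

/* Position of the next use of `page` at or after `from`, or n if never. */
int next_use(const int *ref, int n, int from, int page)
{
    for (int i = from; i < n; i++)
        if (ref[i] == page) return i;
    return n;
}

int main(void)
{
    int ref[] = {0, 1, 2, 3, 4, 0, 1, 5, 6, 0, 1};
    int n = (int)(sizeof ref / sizeof ref[0]);
    int frame[FRAMES], used = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frame[f] == ref[t]) hit = 1;
        if (hit) continue;
        faults++;
        if (used < FRAMES) { frame[used++] = ref[t]; continue; }
        /* OPT: evict the page whose next use lies farthest in the future */
        int victim = 0;
        for (int f = 1; f < used; f++)
            if (next_use(ref, n, t + 1, frame[f]) >
                next_use(ref, n, t + 1, frame[victim]))
                victim = f;
        frame[victim] = ref[t];
    }
    printf("page faults: %d\n", faults);
    return 0;
}
```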
**Demand paging**
A page replacement algorithm is the strategy for selecting the page to be displaced; it must decide in the present, without knowledge of future accesses. Possible demand-based strategies (**demand paging**):
- FIFO
- Not Recently Used (NRU)
- Second chance, clock page (FIFO improvements)
- Least Recently Used (LRU)
- Not Frequently Used (NFU)
Short-term statistics must be stored in the page table entries.
Entries in the page table look like this:
R - reference bit: set if the page has been accessed
M - modify bit: set on a modifying access to the page - sometimes called the dirty bit or D bit
![[Pasted image 20240218184609.png]]
**FIFO** - a strategy in which the oldest page is replaced. It is quite easy to implement with low overhead and is used in some OSs. FIFO needs a list of all page table entries in loading order. Its disadvantage is that it potentially throws important pages out of main memory. The R bit is not necessary, only M and the frame number; the FIFO list must be managed, but entries need not be repositioned on access
![[Pasted image 20240218185016.png|inlR|500]]
Example of FIFO:
Access order: 0-1-2-3-4-0-1-5-6-0-1
| access | 0 | 1 | 2 | 3 | 4 | 0 | 1 | 5 | 6 | 0 | 1 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| RAM | 0 | 0 | 0 | (3) | 3 | 3 | (1) | 1 | 1 | (0) | 0 |
| RAM | - | 1 | 1 | 1 | (4) | 4 | 4 | (5) | 5 | 5 | (1) |
| RAM | - | - | 2 | 2 | 2 | (0) | 0 | 0 | (6) | 6 | 6 |
| P.A. | | | | 0 | 0 | 2 | 2 | 2 | 2 | 2 | 2 |
| P.A. | | | | | 1 | 1 | 3 | 3 | 3 | 3 | 3 |
| P.A. | | | | | | | | 4 | 4 | 4 | 4 |
| P.A. | | | | | | | | | 0 | 1 | 5 |
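A sketch of the FIFO bookkeeping with a rotating "oldest" index (illustrative code, 3 frames as in the example); no R bit is consulted, and feeding it the access order above reproduces the 8 replacements marked in the table:

```c
#define FRAMES 3

static int frame[FRAMES];
static int used = 0;
static int oldest = 0;   /* index of the longest-resident page */

/* Returns 1 on a page fault, 0 on a hit. */
int fifo_access(int page)
{
    for (int f = 0; f < used; f++)
        if (frame[f] == page) return 0;   /* hit */
    if (used < FRAMES) {
        frame[used++] = page;             /* initial fill */
    } else {
        frame[oldest] = page;             /* evict the oldest page */
        oldest = (oldest + 1) % FRAMES;
    }
    return 1;                             /* page fault */
}
```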
**NRU** - Not Recently Used page replacement strategy: pages that have not been used recently are candidates for displacement. It is also easy to implement using the R and M bits, but results in only average performance. The R bit is periodically reset by the kernel, while the M bit is not; resetting the R bit is what tracks recent usage.
**Victim Search Classes** - The algorithm categorizes pages into four classes for "victim search," with the following priorities:
1. Class 1 (R = 0, M = 0): Pages that have neither been recently read nor modified are swapped out first.
2. Class 2 (R = 0, M = 1): Pages that have been modified but not yet written back.
3. Class 3 (R = 1, M = 0): Pages with only read access in the current interval.
4. Class 4 (R = 1, M = 1): Pages that have been recently read and modified are paged out last.
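The classes follow directly from the two bits; a trivial helper (illustrative) makes the victim ordering explicit:

```c
/* NRU victim class from the R and M bits; lower class = better victim. */
int nru_class(int r, int m)
{
    if (!r && !m) return 1;   /* neither recently read nor modified */
    if (!r &&  m) return 2;   /* modified, but not recently read    */
    if ( r && !m) return 3;   /* only read in the current interval  */
    return 4;                 /* recently read and modified         */
}
```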
**Second Chance** - another page replacement strategy, a further improvement of FIFO in which the R (reference) bit is also inspected. It tries to spare the oldest page: if that page has already been referenced, it is moved to the end of the list instead of being replaced. If all pages have already been referenced, the selection of the page to be replaced corresponds to the FIFO algorithm. Unlike pure FIFO, the list order changes, since referenced pages are re-appended
**Clock Page** - an implementation improvement of the Second Chance algorithm: pages are managed in a circular list, like a clock. On a page fault, the page to which the clock hand (pointer) currently points is examined, and page table entries are not repositioned
![[Pasted image 20240218190130.png|inlR|300]]
The clock page algorithm handles a page fault as follows:
- Page pointed to by pointer is swapped out if R bit is 0
- If R = 1 then R is set to 0 and the pointer is moved to the next page
- This continues until a page with R = 0 is found
![[Pasted image 20240218191446.png]]
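A sketch of the victim search loop, assuming an illustrative fixed-size circular frame list:

```c
#define FRAMES 64

typedef struct {
    int page;
    int r;                          /* reference bit */
} clock_frame_t;

static clock_frame_t ring[FRAMES];  /* frames arranged like a clock face */
static int hand = 0;                /* the clock hand                    */

/* Advance the hand until a frame with R = 0 is found; frames with
 * R = 1 get their second chance (R is cleared) and are passed over. */
int clock_select_victim(void)
{
    for (;;) {
        if (ring[hand].r == 0) {
            int victim = hand;          /* this page is swapped out */
            hand = (hand + 1) % FRAMES;
            return victim;
        }
        ring[hand].r = 0;               /* second chance */
        hand = (hand + 1) % FRAMES;
    }
}
```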
If the R bit is set for all pages, the second chance algorithm degenerates into the FIFO algorithm
**LRU** - Least Recently Used algorithm: replace the page whose last use lies furthest in the past. The algorithm keeps track of the time since each page was last used. It gives good results, but the procedure is complex to realise: for example, a linked list with the most recently used pages at the beginning, sorted in descending order of recency and updated on every memory access. MMU hardware support for this exists, but it is rare.
![[Pasted image 20240218191854.png|inlR|300]]![[Pasted image 20240218191906.png|inlR|300]]![[Pasted image 20240218191921.png|inlR|300]]
Example of LRU:
Access order: 0-1-2-3-4-0-1-5-6-0-1
After LRU: 8 replacements in this case, the same as FIFO
| access | 0 | 1 | 2 | 3 | 4 | 0 | 1 | 5 | 6 | 0 | 1 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| RAM | 0 | 0 | 0 | (3) | 3 | 3 | (1) | 1 | 1 | (0) | 0 |
| RAM | - | 1 | 1 | 1 | (4) | 4 | 4 | (5) | 5 | 5 | (1) |
| RAM | - | - | 2 | 2 | 2 | (0) | 0 | 0 | (6) | 6 | 6 |
| P.A. | | | | 0 | 0 | 2 | 2 | 2 | 2 | 2 | 2 |
| P.A. | | | | | 1 | 1 | 3 | 3 | 3 | 3 | 3 |
| P.A. | | | | | | | | 4 | 4 | 4 | 4 |
| P.A. | | | | | | | | | 0 | 1 | 5 |
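A sketch of LRU via logical timestamps, one common software realisation of the "time since last use" bookkeeping described above (names illustrative; as noted, hardware support is rare):

```c
#include <stdint.h>

#define FRAMES 3

static int      frame[FRAMES];
static uint64_t stamp[FRAMES];   /* logical time of last use per frame */
static int      used = 0;
static uint64_t now  = 0;

/* Returns 1 on a page fault, 0 on a hit; the victim is the page
 * with the oldest (smallest) timestamp. */
int lru_access(int page)
{
    now++;
    for (int f = 0; f < used; f++)
        if (frame[f] == page) { stamp[f] = now; return 0; }   /* hit */
    if (used < FRAMES) {
        frame[used] = page; stamp[used] = now; used++;
    } else {
        int victim = 0;
        for (int f = 1; f < used; f++)
            if (stamp[f] < stamp[victim]) victim = f;
        frame[victim] = page; stamp[victim] = now;
    }
    return 1;                                                 /* fault */
}
```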
**NFU** - Not Frequently Used algorithm, which offers a good approximation of LRU and is implemented in software. It replaces pages that are rarely used within a certain time interval: an access counter in the page table entry starts at 1 and is incremented in each interval in which the page was used (R bit = 1). On a page fault, the page with the smallest counter value is selected for replacement, randomly if no unique choice is possible, in practice rather FCFS. The problem is that old, formerly frequently accessed pages that are no longer used are not swapped out. The improvement is to take aging into account, which can perform very close to LRU.
NFU without aging
![[Pasted image 20240218193105.png|inlR|400]]
Access order: 0-1-2-3-4-0-1-5-6-0-1
According to NFU, 4 to 6 replacements are possible here: pages with equal counters are chosen as victims partially at random; 6 replacements occur when falling back to FIFO instead of random choice for equal counters, but it is generally still better than FIFO
Annotation: but with the access order 1-1-1-1-2-3-4-5-3-4-5-6..., page 1 is preferred (its counter stays high) and aging is required
|access | 0 | 1 | 2 | 3 | 4 | 0 | 1 | 5 | 6 | 0 | 1 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| RAM | $0_1$ | $0_1$ | $0_1$ | ($3_1$) | $3_1$ | $3_1$ | ($1_1$) | $1_1$ | $1_1$ | $1_1$ | $1_2$ |
| RAM | - | $1_1$ | $1_1$ | $1_1$ | ($4_1$) | $4_1$ | $4_1$ | ($5_1$) | ($6_1$) | $6_1$ | $6_1$ |
| RAM | - | - | $2_1$ | $2_1$ | $2_1$ | ($0_1$) | $0_1$ | $0_1$ | $0_1$ | $0_2$ | $0_2$ |
| P.A. | | | | 0 | 0 | 2 | 2 | 2 | 2 | 2 | 2 |
| P.A. | | | | | 1 | 1 | 3 | 3 | 3 | 3 | 3 |
| P.A. | | | | | | | | 4 | 4 | 4 | 4 |
| P.A. | | | | | | | | | 0 | 1 | 5 |
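A sketch of the aging refinement mentioned above (illustrative names): at every clock interval each counter is shifted right and the R bit enters at the top, so the counts of pages that are no longer used decay toward zero and old but inactive pages become victims again:

```c
#include <stdint.h>

#define FRAMES 64

static uint8_t counter[FRAMES];   /* aging counter per frame          */
static int     rbit[FRAMES];      /* R bit, set by hardware on access */

/* Run at each clock interrupt; the replacement victim is then simply
 * the frame with the smallest counter value. */
void age_counters(void)
{
    for (int f = 0; f < FRAMES; f++) {
        counter[f] = (uint8_t)((counter[f] >> 1) | (rbit[f] ? 0x80 : 0x00));
        rbit[f] = 0;              /* reset R for the next interval */
    }
}
```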
#### Exercise: Comparison OPT - FIFO - LRU - NFU
Assumption: time of storage = time of first use
Access order: 2-3-2-1-5-2-4-5-3-2-5-2
After OPT 3 page faults
| access | 2 | 3 | 2 | 1 | 5 | 2 | 4 | 5 | 3 | 2 | 5 | 2 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| RAM | 2 | 2 | 2 | 2 | 2 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| RAM | | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | (2) | 2 | 2 |
| RAM | | | | 1 | (5) | 2 | (4) | 4 | 4 | 3 | 3 | 3 |
| P.A. | | | | | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| P.A. | | | | | | | 2 | 2 | 2 | 4 | 4 | 4 |
| P.A. | | | | | | | | | | | | |
Access order: 2-3-2-1-5-2-4-5-3-2-5-2
After FIFO 6 page faults
| access | 2 | 3 | 2 | 1 | 5 | 2 | 4 | 5 | 3 | 2 | 5 | 2 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| RAM | 2 | 2 | 2 | 2 | (5) | 5 | 5 | 5 | (3) | 3 | 3 | 3 |
| RAM | | 3 | 3 | 3 | 3 | (2) | 2 | 2 | 2 | 2 | (5) | 5 |
| RAM | | | | 1 | 1 | 1 | (4) | 4 | 4 | 4 | 4 | (2) |
| P.A. | | | | | 2 | 3 | 3 | 3 | 5 | 5 | 2 | 4 |
| P.A. | | | | | | | 1 | 1 | 1 | 1 | 1 | 1 |
| P.A. | | | | | | | | | | | | |
Access order: 2-3-2-1-5-2-4-5-3-2-5-2
After LRU 4 page faults (page that has not been used for the longest time is swapped out)
| access | 2 | 3 | 2 | 1 | 5 | 2 | 4 | 5 | 3 | 2 | 5 | 2 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| RAM | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | (3) | 3 | 3 | 3 |
| RAM | | 3 | 3 | 3 | (5) | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| RAM | | | | 1 | 1 | 1 | (4) | 4 | 4 | (2) | 2 | 2 |
| P.A. | | | | | 3 | 3 | 3 | 3 | 2 | 4 | 4 | 4 |
| P.A. | | | | | | | 1 | 1 | 1 | 1 | 1 | 1 |
| P.A. | | | | | | | | | | | | |
Access order: 2-3-2-1-5-2-4-5-3-2-5-2
After NFU 3 page faults
| access | 2 | 3 | 2 | 1 | 5 | 2 | 4 | 5 | 3 | 2 | 5 | 2 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| RAM | $2_1$ | $2_1$ | $2_2$ | $2_2$ | $2_2$ | $2_3$ | $2_3$ | $2_3$ | $2_3$ | $2_4$ | $2_4$ | $2_4$ |
| RAM | | $3_1$ | $3_1$ | $3_1$ | ($5_1$) | $5_1$ | $5_1$ | $5_2$ | $5_2$ | $5_2$ | $5_3$ | $5_3$ |
| RAM | | | | $1_1$ | $1_1$ | $1_1$ | ($4_1$) | $4_1$ | ($3_1$) | $3_1$ | $3_1$ | $3_1$ |
| P.A. | | | | | 3 | 3 | 3 | 3 | 4 | 4 | 4 | 4 |
| P.A. | | | | | | | 1 | 1 | 1 | 1 | 1 | 1 |
| P.A. | | | | | | | | | | | | |
**Prepaging, working set** - the methods discussed so far are demand paging. If the locality of programs is exploited, prepaging can also make sense: pages that have not yet been requested are read into main memory preventatively. The set of pages currently required is designated the working set; if this set is in main memory, no page faults occur. The working set algorithm attempts to achieve this goal.
The working set is determined from the last $d$ references. In the following example $d = 10$:
![[Pasted image 20240218220636.png]]
The working set algorithm is based on the following assumptions: the required pages (the working set) change only slowly, and the pages needed in the near future are most likely in the vicinity of those just addressed. The procedure requires remembering the set of pages used. Prepaging is used to load the expected pages preventatively, e.g. loading the probably required pages of a sleeping process before it becomes active again.
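A minimal sketch of determining the working set as the distinct pages among the last $d$ references, with $d = 10$ as in the example above (function name illustrative):

```c
/* Collect the distinct pages among references ref[t-D+1 .. t] into ws[]
 * and return their number, i.e. the working set size at time t. */
#define D 10

int working_set(const int *ref, int t, int *ws, int max)
{
    int n = 0;
    int start = (t - D + 1 > 0) ? t - D + 1 : 0;
    for (int i = start; i <= t; i++) {
        int seen = 0;
        for (int j = 0; j < n; j++)
            if (ws[j] == ref[i]) seen = 1;
        if (!seen && n < max) ws[n++] = ref[i];
    }
    return n;
}
```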
#### Memory allocation strategies
The aim of memory allocation is to avoid external and internal fragmentation. The occupancy of main memory is managed via allocation tables. The implementation can be done, for example, as a bitmap: each frame is assigned a bit, 0 meaning free and 1 meaning occupied. Free main memory areas can then be recognised as runs of adjacent zeros
Bitmap -> `1110000000011111111000001111111111000100`
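A first-fit scan over the bitmap's string form (a sketch; a real implementation would work on packed bits rather than characters):

```c
#include <string.h>

/* Find `need` adjacent free frames (bits == '0') in the occupancy
 * bitmap; returns the start index of the hole, or -1 if none fits. */
int first_fit(const char *bitmap, int need)
{
    int len = (int)strlen(bitmap);
    int run = 0;
    for (int i = 0; i < len; i++) {
        run = (bitmap[i] == '0') ? run + 1 : 0;
        if (run == need)
            return i - need + 1;   /* start of the free run */
    }
    return -1;
}
```

On the bitmap above, `first_fit(bitmap, 5)` returns 3, the start of the first run of at least five zeros.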
**Sequential search** - the first suitable free area is assigned (first fit)
**Optimal search** - searches for the most suitable free area in order to avoid fragmentation as much as possible (best fit)
**Buddy technique** - gradual halving of memory upon a main memory request (a sketch of the core computations follows below).
Memory allocation searches for the smallest suitable free area, then halves the found area until the requested size just fits into a partial area. When main memory is released, frames are combined again: the returned area is merged with all free neighbouring buddy areas (up through their parents) into one larger area
![[Pasted image 20240218221355.png]]
This reduces external fragmentation at the expense of increased internal fragmentation.
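A sketch of the two core buddy computations, under illustrative size assumptions: rounding a request up to the next power-of-two order, and locating a block's buddy for merging on release:

```c
#include <stddef.h>

#define MIN_ORDER 12   /* smallest block: 4 KiB */
#define MAX_ORDER 20   /* largest block:  1 MiB */

/* Smallest order whose block size 2^order holds `bytes`. */
size_t order_for(size_t bytes)
{
    size_t order = MIN_ORDER;
    while (((size_t)1 << order) < bytes && order < MAX_ORDER)
        order++;
    return order;
}

/* The buddy of a block of size 2^order at address `addr` (aligned to
 * 2^order) differs from it in exactly one bit, which is what makes
 * merging adjacent free areas on release cheap. */
size_t buddy_of(size_t addr, size_t order)
{
    return addr ^ ((size_t)1 << order);
}
```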
#### Unloading strategies (cleaning)
The unloading strategy, or cleaning strategy, specifies the time at which a modified page is written to the paging area. Variants are:
- **Demand cleaning** - write back only when necessary. Pro: the page stays long in main memory. Con: delay when changing pages
- **Precleaning** - preemptive write-back when there is time; the advantage is that clean frames are usually available
- **Page buffering** - an implementation of gradual precleaning through list management: pages on a modified list are buffered (written back), pages on an unmodified list are approved for unloading.
#### Memory management in Unix
Unix systems up to BSD 3 used swapping exclusively: a swapper process (daemon) with PID 1 took over the swapping at certain events, or cyclically at intervals of several seconds. From BSD 3 onwards demand paging was added, and all other Unix derivatives adopted it. A page daemon was introduced, a process with PID 2, in which the page replacement algorithm is implemented as a clock algorithm.
For 32-bit Linux: virtual addresses with a length of 32 bits; 1 GiB for the kernel and the page tables, the remaining 3 GiB for the user process.
For 64-bit Linux: 48-bit virtual addresses, i.e. an address space of size $2^{48}$.
Address translation in Linux used three-level page tables up to version 2.6.10; from Linux 2.6.11 onwards four-level page tables are used, possibly mapped to two-level or other page tables if the hardware cannot support them.
**Memory management strategies in Linux**
- **Fetch policy** - demand paging is used as the fetch strategy, without prepaging and without a working set
- **Replacement and cleaning strategy** - done via a kind of clock page algorithm, managing multiple lists of page frames using page buffering. Multiple kernel threads are used for list manipulation, such as:
- **kswapd** - periodically checks the lists and restocks free frames if necessary
- **bdflush** / **pdflush** - periodically writes changed ("dirty") pages to the paging area
- **Placement policy** - memory allocation occurs via the **buddy technique**
Address space allocation for 32-bit architecture:
![[Pasted image 20240218222706.png]]
Memory management under Linux: 32-bit address translation using an example with virtual 32-bit addresses and three-level page tables
![[Pasted image 20240218222754.png]]
#### Memory management in Windows
Virtual addresses are 32 bits long, giving a 4 GiB address space: 3 GiB for the user process and the rest for the kernel. Windows tries to use a linear address space without segmentation. The page size depends on the processor architecture:
| Processor architecture | Size of the small page | Size of the Large Page |
| ---- | ---- | ---- |
| X86 | 4 KiB (12 bit offset) | 4 MiB (22 bit offset) |
| x64 (AMD) | 4 KiB (12 bit offset) | 2 MiB (21 bit offset) |
| IA64 (Intel) | 8 KiB (13 bit offset) | 16 MiB (24 bit offset) |
**Memory management strategies in Windows**
- **Fetch policy** - early versions used demand paging; from Windows 2003 onwards prepaging is used with Windows' own working set procedure
- **Replacement and cleaning strategy** - a combination of local and global replacement strategies: FIFO on multiprocessor machines, clock page on single-processor machines. Multiple swap lists are managed and multiple threads process the lists
- **Placement policy** - not explained in detail
Building a virtual address space
![[Pasted image 20240218223640.png]]
**Windows working set** - each process has a working set with a changeable size (min 50, max 345 page frames, depending on available memory). In the event of a page fault, the maximum working set of a process is not exceeded. Exception: if one process pages heavily and the others do not, the working set of the paging process is increased, but not by more than the available page frames minus 512, so that a few page frames always remain free.
The working set manager thread works cyclically and tries to reclaim free page frames using a complicated procedure. A page frame is either assigned to one or more working sets, or to exactly one of the four lists in which Windows manages free page frames
**Page buffering and List management**
The lists in detail:
- **Modified page list** - pages that have already been selected for page replacement but have not yet been written out and are still assigned to the using process
- **Standby page list** - like the modified page list, with the difference that the pages are clean, i.e. have a valid copy in the paging area
- **Free page list** - frames that are already clean and are no longer assigned to any process
- **Zero page list** - like the free page list, but additionally initialised with zeros
- **Bad RAM page list** - a further list containing defective memory pages
**Special system threads**
- **Swapper thread** - runs every few seconds, looks for processes that have not been doing anything for a while (idle) and puts their frames into the modified or standby page list
- **Modified page writer thread** - runs periodically and ensures enough clean pages by moving pages from the modified page list to the standby page list (writing them back to disk beforehand)
- **Zero page thread** - runs at low priority; it takes frames from the free page list, zeroes them, and places them in the zero page list
![[Pasted image 20240218224402.png]]