Linux doesn't deal with physical memory directly. It works on views of physical memory provided by special hardware. This special hardware is called the memory management unit, or MMU. The MMU swaps views of physical memory in and out according to the kernel's needs.
The MMU provides such views by stitching chunks of physical memory into a hypothetical flat range addressed by 32-bit or 64-bit numbers (such an arrangement is more commonly called a "mapping"). The kernel only sees this hypothetical flat space presented by the MMU.
When memory is accessed, whatever performs the access hands a virtual address to the MMU. Seeing a virtual address, the MMU redirects the access to the actual location in physical memory, the physical address. The indirection is achieved through internal mapping tables. This mapping provides a view of physical memory, and this view abstracted by the MMU is called the virtual address space.
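To make the indirection concrete, here is a toy model in C (not how real hardware works; the table contents and sizes are made up for illustration): a virtual address is split into a page number and an offset, the page number is looked up in a mapping table, and the physical address is rebuilt from the result.

```c
/*
 * Toy model of MMU translation: split a virtual address into a page number
 * and an offset, look the page number up in a table, rebuild the physical
 * address. Real MMUs walk multi-level tables in hardware; this sketch only
 * illustrates the indirection.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                    /* 4 KiB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NR_PAGES   4                     /* tiny toy address space: 4 pages */

/* page_table[virtual page number] = physical page number, -1 means unmapped */
static const int page_table[NR_PAGES] = { 3, 7, -1, 0 };

static long translate(uint32_t vaddr)
{
	uint32_t vpn    = vaddr >> PAGE_SHIFT;     /* virtual page number */
	uint32_t offset = vaddr & (PAGE_SIZE - 1); /* offset within the page */

	if (vpn >= NR_PAGES || page_table[vpn] < 0)
		return -1;                         /* would raise a page fault */

	return ((long)page_table[vpn] << PAGE_SHIFT) | offset;
}

int main(void)
{
	uint32_t vaddrs[] = { 0x0234, 0x1234, 0x2234 };

	for (int i = 0; i < 3; i++) {
		long paddr = translate(vaddrs[i]);

		if (paddr < 0)
			printf("virt 0x%04x -> page fault\n", vaddrs[i]);
		else
			printf("virt 0x%04x -> phys 0x%04lx\n", vaddrs[i], paddr);
	}
	return 0;
}
```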
Being able to present different views of the same physical memory makes abstraction easier. There is a single physical address space, representing memory shared by the whole kernel, and there can be multiple views of this physical address space.
For example, each process can have its own view of memory and think of itself as having the entire address space, even though accesses to most of that address space are redirected to nothing. This keeps processes from stomping on each other.
The MMU can also lie to the kernel. It can map things other than physical memory into that flat space. For example, it can map registers from peripherals into that space, tricking the kernel into thinking it is reading or writing memory, when in fact it is touching a register of another device.
It can also lie to the kernel in a more disruptive way: it may redirect a virtual address to nothing at all. This can happen for many reasons. When it does, the MMU raises an exception called a page fault, and the kernel has to handle it. Sometimes adding an extra mapping is enough.
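You can watch this "add a mapping and retry" behaviour from user space. The following sketch (assuming a Linux system with 4 KiB pages) maps an anonymous region with mmap(); no physical pages back it until it is touched, and each first touch is resolved by a minor page fault that the kernel handles by adding a mapping.

```c
/*
 * Demand paging observed from user space: mmap() an anonymous region (no
 * physical pages yet), then touch it and watch the minor-fault counter grow
 * as the kernel adds mappings on demand.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>

static long minor_faults(void)
{
	struct rusage ru;

	getrusage(RUSAGE_SELF, &ru);
	return ru.ru_minflt;
}

int main(void)
{
	size_t len = 64 * 4096;   /* 64 pages */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	long before = minor_faults();
	memset(p, 0xab, len);             /* first touch faults each page in */
	long after = minor_faults();

	printf("minor faults while touching 64 pages: %ld\n", after - before);
	munmap(p, len);
	return 0;
}
```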
In theory this mapping could be completely arbitrary, but that would make it hard to manage. Instead, each virtual address space, that is, each such view of memory, follows a very specific format: you can only peek at specific parts of physical memory through certain regions of the virtual address space. This is called the layout of the virtual address space.
Again, this is a design choice for easier management (and, more often than not, historical reasons), even though it could theoretically be arbitrary.
Source: Bootlin
In 32-bit systems, the virtual space is addressed by 32-bit addresses, so there are 4G possible addresses, from 0x00000000 to 0xFFFFFFFF. The side towards 0xFFFFFFFF is often called the "top". Note that top and bottom are somewhat subjective; in some figures the picture is drawn upside down.
Each virtual address space is divided into two parts. The top part of the address space is for the kernel, the bottom part is for user space. The most common proportion is 1G for the kernel and 3G for user space, although 2G/2G and 3G/1G (kernel/user) splits are also possible in the kernel config.
The top part of the virtual address space is for the kernel. This is where memory used by the kernel is mapped and where kernel code is accessed. Kernel drivers, being part of the kernel, are also mapped here.
Within this 1G kernel address space, the first 896 MB is mapped directly to the beginning of physical memory: poking at an offset in this area of the virtual address space pokes at the same offset from the beginning of physical memory. This way of mapping is sometimes called the 1:1 (or linear) mapping to physical memory.
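Here is a minimal sketch of the arithmetic behind the 1:1 mapping, in the spirit of the kernel's __pa()/__va() helpers. It assumes a 32-bit x86 kernel with the default 3G/1G split, where the direct map starts at PAGE_OFFSET = 0xC0000000; the rule only holds for addresses inside the ~896 MB direct map.

```c
/*
 * Sketch of the 1:1 (linear) mapping arithmetic. Assumes a 32-bit x86 kernel
 * with the default split, direct map starting at PAGE_OFFSET = 0xC0000000.
 */
#include <stdio.h>

#define PAGE_OFFSET 0xC0000000UL   /* start of the kernel's direct map */

static unsigned long virt_to_phys_lin(unsigned long vaddr)
{
	return vaddr - PAGE_OFFSET;    /* same offset from the start of RAM */
}

static unsigned long phys_to_virt_lin(unsigned long paddr)
{
	return paddr + PAGE_OFFSET;
}

int main(void)
{
	unsigned long v = 0xC1000000UL;              /* 16 MB into the direct map */

	printf("virt 0x%08lx -> phys 0x%08lx\n", v, virt_to_phys_lin(v));
	printf("phys 0x%08lx -> virt 0x%08lx\n", 0x00800000UL,
	       phys_to_virt_lin(0x00800000UL));
	return 0;
}
```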
This 1:1 mapping doesn't occupy the whole 1G kernel address space, though. A portion of the virtual address space is reserved for more flexible use.
One of the scenarios is accessing memory beyond what is 1:1 mapped. Imagine there is more than 1G of memory: even if the whole kernel part of the address space were used, it would still be impossible to reach the rest of memory this way. Instead, part of the address space is reserved for accessing physical memory not covered by the 1:1 mapping region. This is done by temporarily mapping other parts of physical memory into this reserved address space.
Another scenario is accessing peripherals. Memory is not the only thing that can be reached through the virtual address space: registers from peripherals can also be mapped into it, so that access to those registers is reduced to normal memory accesses. This way of accessing peripheral devices is called memory-mapped I/O, or MMIO. This is another case where flexibility is needed.
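Below is a minimal kernel-side sketch of MMIO using ioremap() and readl()/writel(). The device base address and register offsets are made-up values for illustration, not a real peripheral.

```c
/*
 * MMIO sketch: map a peripheral's register block into the kernel's virtual
 * address space with ioremap(), then use readl()/writel() on it. The base
 * address and offsets are hypothetical.
 */
#include <linux/io.h>
#include <linux/module.h>

#define UART_PHYS_BASE  0x10009000UL   /* hypothetical device base address */
#define UART_REG_SIZE   0x1000
#define UART_STATUS     0x18           /* hypothetical register offsets */
#define UART_DATA       0x00

static void __iomem *uart_base;

static int __init mmio_demo_init(void)
{
	u32 status;

	uart_base = ioremap(UART_PHYS_BASE, UART_REG_SIZE);
	if (!uart_base)
		return -ENOMEM;

	/* these look like memory accesses, but they reach device registers */
	status = readl(uart_base + UART_STATUS);
	writel('A', uart_base + UART_DATA);

	pr_info("uart status: %#x\n", status);
	return 0;
}

static void __exit mmio_demo_exit(void)
{
	iounmap(uart_base);
}

module_init(mmio_demo_init);
module_exit(mmio_demo_exit);
MODULE_LICENSE("GPL");
```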
Note that here ZONE_HIGHMEM refers to the memory that is not 1:1 mapped into kernel space, and the part that is 1:1 mapped is called, well, ZONE_NORMAL. See the definitions in include/linux/mmzone.h
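As a sketch of how kernel code reaches a page that may live in ZONE_HIGHMEM: it allocates a page that is allowed to come from highmem and maps it temporarily with kmap_local_page() before using it.

```c
/*
 * Touching memory that is not covered by the 1:1 mapping (ZONE_HIGHMEM on
 * 32-bit): allocate a page that may come from highmem, map it temporarily,
 * use it, then tear the mapping back down.
 */
#include <linux/gfp.h>
#include <linux/highmem.h>
#include <linux/string.h>

static int touch_highmem_page(void)
{
	struct page *page;
	void *vaddr;

	page = alloc_page(GFP_HIGHUSER);   /* may land in ZONE_HIGHMEM */
	if (!page)
		return -ENOMEM;

	vaddr = kmap_local_page(page);     /* create a temporary mapping */
	memset(vaddr, 0, PAGE_SIZE);       /* now it can be accessed normally */
	kunmap_local(vaddr);               /* tear the mapping back down */

	__free_page(page);
	return 0;
}
```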
The bottom 3G is for the process. This is the portal to the segments of an ELF such as text, data, and rodata, and it is also where memory-mapped files live.
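A quick user-space sketch for poking at this layout: it prints where code, read-only data, writable data, the heap, a memory-mapped file, and the stack end up, which can be compared against /proc/self/maps. The file used for mmap() is arbitrary.

```c
/* Print the addresses of different parts of the user address space. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int global_data = 42;                 /* lands in the data segment */
const int global_rodata = 7;          /* lands in rodata */

int main(void)
{
	int stack_var = 0;
	void *heap = malloc(64);

	int fd = open("/etc/hostname", O_RDONLY);       /* any small file works */
	void *file_map = (fd >= 0)
		? mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0)
		: MAP_FAILED;

	printf("text   : %p\n", (void *)main);
	printf("rodata : %p\n", (void *)&global_rodata);
	printf("data   : %p\n", (void *)&global_data);
	printf("heap   : %p\n", heap);
	printf("mmap   : %p\n", file_map);
	printf("stack  : %p\n", (void *)&stack_var);

	free(heap);
	return 0;
}
```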
In the world of 64-bit systems, even though they are called "64-bit", neither the physical nor the virtual addresses usually use all 64 bits, simply because full 64-bit addressing is unnecessarily large. For example, 64-bit Intel processors handle at most 57-bit linear (virtual) addresses and 52-bit physical addresses with 5-level paging.
For such an address to be valid, the remaining upper bits must be either all 1s or all 0s (copies of the highest implemented bit). These valid addresses are called canonical addresses. This naturally divides the virtual address space into two halves, according to whether the unused bits are all 1s or all 0s. The former are used by the kernel, the latter by user space.
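A small sketch of the canonicality rule for the common 4-level-paging case (48 implemented bits): bits 63..47 must all equal bit 47, i.e. the address is the sign extension of its low 48 bits. With 5-level paging the same rule applies at bit 56.

```c
/* Check whether an address is canonical for 48 implemented bits. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool is_canonical48(uint64_t addr)
{
	/* sign-extend from bit 47 and check that nothing changed */
	int64_t sext = ((int64_t)(addr << 16)) >> 16;

	return (uint64_t)sext == addr;
}

int main(void)
{
	printf("%d\n", is_canonical48(0x00007fffffffffffULL)); /* top of user half: 1 */
	printf("%d\n", is_canonical48(0xffff800000000000ULL)); /* bottom of kernel half: 1 */
	printf("%d\n", is_canonical48(0x0000800000000000ULL)); /* non-canonical: 0 */
	return 0;
}
```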
On x86-64 systems, accessing a non-canonical address raises a general protection fault (#GP). In those events, the kernel also complains about the non-canonical address, like the one in this mailing list thread:
The first 128T of the top half of the address space is reserved for the 1:1 mapping. Because this is almost always far larger than the installed RAM, the whole physical memory can be mapped in the 1:1 fashion. Another 128T is for MMIO and other flexible use.
Note that in this case ZONE_HIGHMEM is no longer needed. For example, it doesn't exist on x86_64 systems; see its kconfig in arch/x86/Kconfig. Highmem, by definition, is the physical memory not 1:1 mapped into the kernel. Now the 1:1 mapping region is so large that there is no physical memory that cannot fit into this area of the virtual address space, which eliminates the need for ZONE_HIGHMEM.
The user-space part of the address space is rather similar to the 32-bit case.
In user space, memory allocation is implemented in user space itself, for example by the glibc implementation of malloc(), or by other allocators like jemalloc.
In glibc, for example, memory obtained from the kernel via brk() or sbrk() is carved into chunks and bins, and chunks of the proper size are handed out on allocation. Under the hood, when a process needs more heap, the system calls that grow the heap set up anonymous mappings, and the anonymous pages are populated as they are first touched.
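The system-call side of this can be seen with a short sketch: sbrk() moves the "program break" (the end of the heap mapping), and the user-space allocator carves the resulting region into chunks. The kernel only extends the anonymous mapping here; pages become real when first touched.

```c
/* Grow the heap directly with sbrk() and show the break moving. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	void *old_brk = sbrk(0);           /* current end of the heap */
	void *chunk   = sbrk(16 * 4096);   /* grow the heap by 16 pages */
	void *new_brk = sbrk(0);

	if (chunk == (void *)-1)
		return 1;

	printf("break moved from %p to %p\n", old_brk, new_brk);
	memset(chunk, 0, 16 * 4096);       /* first touch makes the pages real */
	return 0;
}
```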
Note that an anonymous page is a page that is not backed by an actual file; see Anonymous Memory in the Linux Kernel documentation. In contrast, pages whose contents come from files belong to the page cache. Also note that "cache" here doesn't mean the CPU's L1/L2/L3 hardware caches; it is a specific term for pages that temporarily hold the contents of files. See Page cache.
(Source)
The kernel also maintains its own allocators. The page allocator hands out physically contiguous memory in power-of-2 numbers of pages; the slab allocator (of which only SLUB remains today) creates slab caches on top of pages from the page allocator. Finally, the kmalloc() allocator is built from multiple slab caches of different chunk sizes created by the slab allocator: it decides which slab cache to use based on the size the caller requests and returns a piece of a slab from that cache.
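Here is a sketch of the three allocators using kernel APIs as they exist today (alloc_pages(), kmem_cache_create()/kmem_cache_alloc(), kmalloc()); the struct and cache name are made up, and a real module would do fuller error handling.

```c
/* The three kernel-side allocators, side by side. */
#include <linux/gfp.h>
#include <linux/slab.h>

struct demo_obj {
	int id;
	char name[32];
};

static void allocator_demo(void)
{
	struct page *pages;
	struct kmem_cache *cache;
	struct demo_obj *obj;
	void *buf;

	/* 1. page allocator: order 2 = 2^2 = 4 physically contiguous pages */
	pages = alloc_pages(GFP_KERNEL, 2);

	/* 2. slab allocator: a dedicated cache of fixed-size demo_obj objects */
	cache = kmem_cache_create("demo_obj", sizeof(struct demo_obj), 0, 0, NULL);
	obj = cache ? kmem_cache_alloc(cache, GFP_KERNEL) : NULL;

	/* 3. kmalloc(): picks one of the generic kmalloc-<size> slab caches */
	buf = kmalloc(200, GFP_KERNEL);    /* served from e.g. kmalloc-256 */

	kfree(buf);                        /* kfree(NULL) is a no-op */
	if (obj)
		kmem_cache_free(cache, obj);
	if (cache)
		kmem_cache_destroy(cache);
	if (pages)
		__free_pages(pages, 2);
}
```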
For now, we treat "physical memory" as "physical address space". We'll take it for granted that all memory and memory-mapped I/O devices somehow line up tidily in one big flat space, even though this definitely requires effort to set up. The physical address space is hardware specific: each vendor decides at which addresses a given resource can be accessed. Beyond vendor decisions, the firmware or bootloader can also decide what the kernel being loaded should see, so they can manipulate the information presented to the kernel, including the view of physical memory. There is an introduction in Mentorship Session: Debugging Linux Memory Management Subsystem explaining how memory is detected during boot.
Physical memory is divided into ZONEs by the kernel according to its properties. See How the Linux kernel divides up your RAM for an explanation.
Division of virtual memory in arm32:
For the x86_64: