QEMU

What is QEMU?

QEMU (Quick Emulator) is a free and open-source emulator. It emulates a computer's processor through dynamic binary translation and provides a set of different hardware and device models for the machine, enabling it to run a variety of guest operating systems. It can interoperate with Kernel-based Virtual Machine (KVM) to run virtual machines at near-native speed. QEMU can also do emulation for user-level processes, allowing applications compiled for one processor architecture to run on another.

A guest operating system running in the emulated computer accesses these devices, and runs as if it were running on real hardware. For instance, you can pass an ISO image as a parameter to QEMU, and the OS running in the emulated computer will see a real CD-ROM inserted into a CD drive.

QEMU can emulate a great variety of hardware, from ARM to SPARC, but Proxmox VE is only concerned with 32- and 64-bit PC clone emulation, since that represents the overwhelming majority of server hardware. PC clone emulation is also one of the fastest thanks to processor extensions, which greatly speed up QEMU when the emulated architecture is the same as the host architecture.

QEMU supports the emulation of various architectures, including x86, ARM, PowerPC, RISC-V, and others.

QEMU has multiple operating modes

  1. User-mode emulation : In this mode QEMU runs single Linux or Darwin/macOS programs that were compiled for a different instruction set. System calls are thunked for endianness and for 32/64 bit mismatches. Fast cross-compilation and cross-debugging are the main targets for user-mode emulation.

  2. System emulation: In this mode QEMU emulates a full computer system, including peripherals. It can be used to provide virtual hosting of several virtual computers on a single computer. QEMU can boot many guest operating systems, including Linux, Solaris, Microsoft Windows, DOS, and BSD; it supports emulating several instruction sets, including x86, MIPS, 32-bit ARMv7, ARMv8, PowerPC, RISC-V, SPARC, ETRAX CRIS and MicroBlaze.

  3. Hypervisor Support : In this mode QEMU either acts as a Virtual Machine Manager (VMM) or as a device emulation back-end for virtual machines running under a hypervisor. The most common is Linux's KVM but the project supports a number of hypervisors including Xen, Apple's HVF, Windows' WHPX and NetBSD's nvmm.

KVM - QEMU

KVM-QEMU is a virtualization stack in which QEMU connects to the KVM kernel module to provide a form of full virtualization.


The combination of KVM and QEMU provides a form of full virtualization. This means that the guest operating system running inside the VM is unaware that it's running in a virtualized environment. It behaves as if it were running directly on the physical hardware.

So, while QEMU plays a crucial role in the overall virtualization process, KVM is the primary component responsible for the full virtualization capabilities.

Management Tools


These include:

  • virsh
  • virt-manager
  • OpenStack
  • oVirt

For each virtualization type, such as KVM, Xen, and so on, a libvirt daemon runs to control the hypervisor and expose APIs, so that tools such as virsh, virt-manager, OpenStack, and oVirt can communicate with KVM-QEMU through libvirt.

Libvirt is a library that lets you configure virtual machines from Python and other programming languages. Virsh is a command-line toolkit for monitoring and configuring virtual machine settings. Virt-manager is a GUI similar to VMware Player, an alternative to virsh; it also uses libvirt.
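As a quick sketch of how these tools sit on top of libvirt (the domain name myvm below is a hypothetical placeholder, and a running libvirt daemon is assumed):

```shell
# List all domains (VMs) that libvirt knows about, running or shut off.
virsh list --all

# Start and gracefully shut down the hypothetical domain "myvm".
virsh start myvm
virsh shutdown myvm

# Dump the domain's XML definition; libvirt turns this XML into the
# QEMU command line when the domain starts.
virsh dumpxml myvm
```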

Relationship between KVM and QEMU

The following is adapted from Kaustubh Pradhan's answer to the question How do KVM and QEMU work together?


KVM - resides in the Linux kernel as a loadable module. Once loaded, KVM turns the Linux kernel into a type-1, aka bare-metal, hypervisor: a VM is essentially a Linux process. It depends on the Intel VT-x and AMD-V virtualization extensions, on Intel and AMD hardware respectively, for the hardware assists that enable robust virtualization. Working in concert with these extensions, KVM delivers a better virtualization experience with high throughput and near-zero latency. Thus all the VMs (read: processes) can run without any performance or compatibility hit, as if they were running natively on a dedicated CPU. Also, because of the aforementioned extensions, the VMs have a greater awareness of the capabilities of the underlying hardware platform. It is therefore fair to say that KVM offers hardware virtualization in its sincerest and best form.

QEMU - on the other hand, resides in user space and provides system emulation, including the processor and various peripherals. Typically, QEMU is deployed along with KVM as an in-kernel accelerator: KVM executes most of the guest code natively, while QEMU emulates the rest of the machine (the peripherals) needed by the guest. Where the VM has to talk to external devices, QEMU uses passthrough.

When used together, KVM provides the low-level virtualization capabilities, while QEMU handles the high-level emulation and management of the virtual machines. The combination of KVM and QEMU allows for efficient, hardware-accelerated virtualization on Linux systems, with QEMU providing the user-facing tools and customization options for the virtual machines.

The typical workflow is:

  1. KVM provides the core virtualization infrastructure and hardware acceleration.
  2. QEMU is used to create and manage the virtual machine, including configuring its hardware components.
  3. QEMU leverages the KVM kernel module to run the virtual machine, taking advantage of the hardware virtualization capabilities.
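The workflow above corresponds to a typical invocation such as the following sketch; the disk image name and the memory/CPU sizes are placeholders:

```shell
# Boot a guest with KVM acceleration. QEMU configures the machine model
# (step 2) and hands guest-code execution to the KVM module (steps 1 and 3).
qemu-system-x86_64 \
    -enable-kvm \
    -m 2048 \
    -smp 2 \
    -drive file=guest.qcow2,format=qcow2 \
    -nic user
```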

This combination of KVM and QEMU allows for a flexible and performant virtualization solution on Linux, making it a popular choice for server virtualization, cloud computing, and other use cases where efficient resource utilization and isolation are important.

How does QEMU interact with KVM?

kvm-all.c

  1. Obtain KVM handle


s->fd = qemu_open("/dev/kvm", O_RDWR);
  2. Create a virtual machine and obtain the virtual machine handle


s->vmfd = kvm_ioctl(s, KVM_CREATE_VM, 0);
  3. Map memory for the virtual machine and initialize PCI and signals


kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
  4. Map the virtual machine image to memory. This process is like booting a physical machine

  5. Create vCPUs and allocate space for each vCPU


  6. The thread goes into a loop. The exit reason is captured and the related command is executed


In this loop:

do {
    if (env->exit_request) {
        dprintf("interrupt exit requested\n");
        ret = 0;
        break;
    }
    if (env->kvm_state->regs_modified) {
        kvm_arch_put_registers(env);
        env->kvm_state->regs_modified = 0;
    }
    kvm_arch_pre_run(env, run);
    qemu_mutex_unlock_iothread();
    ret = kvm_vcpu_ioctl(env, KVM_RUN, 0);
    qemu_mutex_lock_iothread();
    kvm_arch_post_run(env, run);
    if (ret == -EINTR || ret == -EAGAIN) {
        dprintf("io window exit\n");
        ret = 0;
        break;
    }
    if (ret < 0) {
        dprintf("kvm run failed %s\n", strerror(-ret));
        abort();
    }
    kvm_run_coalesced_mmio(env, run);
    ret = 0; /* exit loop */
    switch (run->exit_reason) {
    case KVM_EXIT_IO:
        dprintf("handle_io\n");
        ret = kvm_handle_io(run->io.port,
                            (uint8_t *)run + run->io.data_offset,
                            run->io.direction,
                            run->io.size,
                            run->io.count);
        break;
    case KVM_EXIT_MMIO:
        dprintf("handle_mmio\n");
        cpu_physical_memory_rw(run->mmio.phys_addr,
                               run->mmio.data,
                               run->mmio.len,
                               run->mmio.is_write);
        ret = 1;
        break;
    case KVM_EXIT_IRQ_WINDOW_OPEN:
        dprintf("irq_window_open\n");
        break;
    case KVM_EXIT_SHUTDOWN:
        dprintf("shutdown\n");
        qemu_system_reset_request();
        ret = 1;
        break;
    case KVM_EXIT_UNKNOWN:
        dprintf("kvm_exit_unknown\n");
        break;
    case KVM_EXIT_FAIL_ENTRY:
        dprintf("kvm_exit_fail_entry\n");
        break;
    case KVM_EXIT_EXCEPTION:
        dprintf("kvm_exit_exception\n");
        break;
    case KVM_EXIT_DEBUG:
        dprintf("kvm_exit_debug\n");
#ifdef KVM_CAP_SET_GUEST_DEBUG
        if (kvm_arch_debug(&run->debug.arch)) {
            gdb_set_stop_cpu(env);
            vm_stop(EXCP_DEBUG);
            env->exception_index = EXCP_DEBUG;
            return 0;
        }
        /* re-enter, this exception was guest-internal */
        ret = 1;
#endif /* KVM_CAP_SET_GUEST_DEBUG */
        break;
    default:
        dprintf("kvm_arch_handle_exit\n");
        ret = kvm_arch_handle_exit(env, run);
        break;
    }
} while (ret > 0);

QEMU Installation

wget https://download.qemu.org/qemu-9.0.2.tar.xz
tar xvJf qemu-9.0.2.tar.xz
cd qemu-9.0.2
sudo apt update
sudo apt-get install libgtk-3-dev
./configure --enable-gtk
sudo make install

QEMU example

qemu-x86_64 and qemu-i386

# install gcc to build hello world program
sudo apt update
sudo apt install build-essential

hello.c

#include <stdio.h>

int main() {
    printf("Hello Broder...\n");
    return 0;
}


Running hello world on qemu

# Install qemu user mode environment
sudo apt install qemu-user

Check which QEMU user-mode binaries were installed (for example with ls /usr/bin/qemu-*).

Keep in mind that the naming convention for QEMU user-mode commands is qemu-architecture, where architecture can be arm (32-bit ARM), aarch64 (64-bit ARM), i386 (32-bit x86), x86_64 (64-bit x86), etc. In the next section, we will introduce QEMU system mode, whose naming convention is qemu-system-architecture. Note the difference.

Let's see how we can execute the hello programs on qemu.

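For example, assuming the hello.c from above on an x86-64 host:

```shell
# Build a native binary and run it through the user-mode emulator.
gcc -o hello hello.c
./hello               # runs directly on the host
qemu-x86_64 ./hello   # same output, but through QEMU user-mode emulation
```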

qemu-aarch64

sudo apt install gcc-aarch64-linux-gnu
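With the cross-compiler installed, the same hello.c can be built for 64-bit ARM and executed through qemu-aarch64. As a sketch, linking statically is a simple way to avoid pointing QEMU at the target's shared libraries with -L:

```shell
# Cross-compile hello.c for aarch64; -static avoids the need for
# the ARM dynamic linker and shared libraries on the host.
aarch64-linux-gnu-gcc -static -o hello-aarch64 hello.c

# file(1) shows the binary targets ARM aarch64 ...
file hello-aarch64

# ... yet it runs on the x86 host through user-mode emulation.
qemu-aarch64 ./hello-aarch64
```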


qemu-system-i386

To run a virtual machine in QEMU system mode, you need an OS image.

Choosing Linux 0.11 as your OS image for learning purposes is a great choice. It's relatively simple, allowing you to grasp the core concepts of an operating system without getting overwhelmed by complexities.

  • Image format: The OS image should be in a format compatible with QEMU (e.g., raw, qcow2).
  • Bootloader: You might need a bootloader (like GRUB or LILO) configured to boot the kernel from the image.
  • Hardware emulation: QEMU provides the necessary hardware emulation for the OS to function, including CPU, memory, disk, and network devices.

Check Old Linux and follow the Linux Ancient Resources to download qemu-images/Linux 0.11 on qemu-12.5.i386.zip.

Once downloaded and unzipped, run:

# use the following command, reference from file `linux.bat`
qemu-system-i386 -hda linux-0.11-devel-060625.qcow2 -no-reboot -m 16M


Then press 1 to boot from the first device.


Now Linux 0.11 is running on qemu-system-i386. You may find that many commands are missing; however, some basic commands are available.

QEMU - code flow

The walkthrough below follows the QEMU v1.6.1 source code.


main()

  • In QEMU, the main() function resides in vl.c. It's like the grand conductor, orchestrating the entire show.
  • When you fire up QEMU, this function kicks into action. Its primary job? Setting up the virtual machine environment. Think of it as preparing the stage before the actors (virtual CPUs) start their performance.
  • Here's what it does:
    • Configures the VM: Sets RAM size, devices, CPU count, and other specs based on your virtual machine configuration.
    • Initializes various subsystems: QEMU has a lot going on—emulated hardware, memory management, device models, and more. The main() function gets everything ready.
    • Branches out: Once the setup is complete, execution branches out to other files (like /cpus.c, /exec-all.c, /exec.c, and /cpu-exec.c). These files handle the nitty-gritty of CPU execution, dynamic translation, and other magic.
  • In summary, the main() function is the architect laying the foundation for your virtual world.

QEMU is like a symphony—a harmonious blend of orchestration, translation, and emulation. So, next time you fire up QEMU, tip your hat to the main() function—it's the maestro behind the scenes! 🎩🌟

(1) Qemu source code flow - Stack Overflow.
(2) on the Virtualization Stack.
(3) The QEMU build system architecture — QEMU documentation

qemu_tcg_cpu_thread_fn()

  1. Multithreading Magic (MTTCG):

    • Imagine QEMU as a bustling theater with multiple stages. Each stage represents a guest thread or virtual CPU (vCPU). Now, the qemu_tcg_cpu_thread_fn function is like the director backstage, coordinating the actors (threads) for a grand performance.
    • Purpose: It enables multithreading for the Tiny Code Generator (TCG) in system emulation mode. Specifically, it allows one host thread per guest thread or vCPU.
    • When Introduced: This magic was first unveiled in QEMU 2.9 for Alpha and ARM architectures. Since then, work has been ongoing to extend full multithreading support to other system emulations.
  2. How It Works:

    • Previously, QEMU scheduled multiple vCPUs in a single thread, executing them in a round-robin fashion. But MTTCG changes the game:
      • Each vCPU gets its own host thread. Think of it as giving each actor their own dressing room.
      • When a guest thread (vCPU) needs translation (from guest instructions to host code), it steps onto its designated stage (host thread).
      • The TCG dance begins: Instructions are translated, and the show goes on!
    • This parallelism boosts performance, especially when you have a bustling VM with many vCPUs.
  3. Enabling MTTCG:

    • Usually, you don't need to explicitly enable MTTCG—it's automatic if:
      • The guest architecture defines TARGET_SUPPORTS_MTTCG.
      • The host architecture's TCG target mode (TCG_TARGET_DEFAULT_MO) supports it.
    • But beware: Forcing MTTCG without meeting these conditions might lead to strange behavior (like actors improvising wildly).
  4. Porting a Guest Architecture:

    • Before MTTCG can waltz with a guest, some steps are essential:
      • Translate atomic/exclusive instructions correctly.
      • Handle barrier instructions (like tcg_gen_mb) gracefully.
      • Audit instructions that modify system state (e.g., taking BQL).
      • Manage MMU (memory management) functions—think of it as choreographing the memory ballet.
      • Ensure power/reset sequences are in sync (like synchronized swimmers).
  5. Testing and Further Work:

    • Comprehensive tests are crucial. Imagine it's dress rehearsal—exercising all the corners of system emulation behavior.
    • Strong-on-weak memory consistency (emulating x86 on an ARM host) is still a work in progress.
  6. People Behind the Curtain:

    • Fred Konrad (The Original MTTCG Patch Set)
    • Alex Bennée (ARM Testing and Base Enabling)
    • Alvise Rigo (LL/SC Work)
    • Emilio Cota (QHT, cmpxchg atomics)
  7. Reading Material:

    • If you're curious, Emilio's CGO17 paper slides provide extra backstage insights: Read here.

So, next time you see MTTCG in action, give a nod to the qemu_tcg_cpu_thread_fn—it's the conductor orchestrating the multithreaded symphony! 🎭🌟

(1) Features/tcg-multithread - QEMU.
(2) TCG Intermediate Representation — QEMU documentation
(3) Cache Modelling TCG Plugin - QEMU.
(4) Cross-ISA Machine Emulation for Multicores

tcg_exec_all()

  1. TCG (Tiny Code Generator) Overview:

    • Before we dive into specifics, a quick recap: TCG is QEMU's dynamic translation backend. It's responsible for converting guest instructions (from the virtual machine) into host-specific machine code.
    • Think of TCG as the magical bridge between the virtual world (guest architecture) and the real world (host architecture).
  2. Purpose of tcg_exec_all:

    • The tcg_exec_all function plays a pivotal role in executing translated code. Let's break it down:
      • Translation Blocks (TBs): When QEMU translates guest instructions, it creates chunks called TBs. Each TB contains a sequence of translated instructions.
      • TB Execution: When a guest instruction needs execution, QEMU looks up the corresponding TB (if already translated) or translates it on-the-fly.
      • Chaining TBs: Here's where tcg_exec_all shines:
        • Imagine a guest program with multiple instructions. These instructions form a chain of TBs.
        • Instead of returning to the main loop after executing each TB, QEMU can directly jump to the next TB.
        • This chaining avoids unnecessary overhead and accelerates execution.
        • It's like a relay race: One TB passes the baton (execution context) to the next TB.
      • Interrupt Handling: Importantly, tcg_exec_all ensures that interrupt handling (like masked interrupts) happens correctly. If CPU state changes (e.g., privilege level), it exits the TB execution loop to re-evaluate interrupt conditions.
  3. Direct Block Chaining Mechanisms:

    • QEMU employs two mechanisms for chaining TBs directly:
      • lookup_and_goto_ptr:
        • Calls helper_lookup_tb_ptr, which searches for an existing TB matching the current CPU state.
        • If found, it branches directly to that TB; otherwise, it returns to the main loop.
        • Efficient and avoids unnecessary retranslation.
      • goto_tb + exit_tb:
        • Used for branching within the translation code.
        • It's like saying, "Hey, let's jump to the next TB, and by the way, handle any state changes or interrupts."
        • Ensures correctness while maintaining performance.
  4. Why Is This Important?

    • Performance! By chaining TBs directly, QEMU minimizes context switches and retranslation.
    • It's like having a well-rehearsed play: Actors move seamlessly from one scene (TB) to the next without leaving the stage (main loop).
  5. Fun Fact:

    • The maximum size of a TB is 512 instructions. Beyond that, QEMU splits it into smaller chunks for efficient execution.
  6. TL;DR:

    • tcg_exec_all orchestrates TB execution, chaining them efficiently.
    • It's the conductor ensuring the translated symphony plays smoothly.

If QEMU were a theater, tcg_exec_all would be the stage manager, making sure the actors (TBs) hit their cues! 🎭🌟

(1) QEMU - Code Flow [ Instruction cache and TCG] - Stack Overflow.
(2) Translator Internals — QEMU documentation.
(3) Qemu source code flow - Stack Overflow.
(4) Features/tcg-multithread - QEMU.
(5) Features/TCG - QEMU.
(6) Multi-threaded TCG — QEMU documentation.

tcg_cpu_exec()

  1. The Dynamic Translation Dance (TCG):

    • Imagine QEMU as a grand theater where virtual CPUs (vCPUs) perform intricate routines. Each vCPU executes guest instructions, but they're written in a language foreign to the host system.
    • Enter TCG—the Tiny Code Generator. Its mission? To translate these guest instructions into host-specific machine code. Think of it as the interpreter backstage, whispering translations to the actors.
  2. tcg_cpu_exec Takes the Stage:

    • Purpose: The tcg_cpu_exec function orchestrates this translation ballet. It's like the choreographer ensuring the steps flow seamlessly:
      • When a vCPU needs execution, it steps onto the stage (the current Translation Block or TB).
      • The TB contains a sequence of translated instructions (host code).
      • tcg_cpu_exec directs the vCPU through this TB, executing the translated dance moves.
    • TB Size: Each TB has a maximum size (512 instructions). Beyond that, QEMU splits it into smaller chunks for efficiency.
  3. CPU State Optimizations:

    • vCPUs have internal states (privilege level, segment bases, etc.). To speed things up:
      • The translation phase assumes that certain state information won't change within a TB.
      • If state changes (e.g., privilege level), a new TB is generated.
      • The old TB rests backstage until its state matches again.
      • For example, if segment bases are zero, no need to generate segment base additions.
      • It's like telling the dancers: "Stay in character until the scene changes!"
  4. Direct Block Chaining (Fancy Footwork):

    • After executing a TB, QEMU seeks the next TB based on the simulated Program Counter (PC) and CPU state.
    • In the basic form:
      • Exit the current TB.
      • Go through the TB epilogue (like a backstage exit).
      • Return to the main loop to find the next TB (if not already in memory).
    • But wait! We can waltz faster:
      • lookup_and_goto_ptr:
        • Finds an existing TB matching the current CPU state.
        • If found, directly branches to that TB; otherwise, returns to the main loop.
      • goto_tb + exit_tb:
        • A dance move: Branch to the next TB or exit gracefully.
        • Ensures interrupt handling (like unmasking interrupts) happens correctly.
  5. Why Is This Important?

    • Performance! Chaining TBs directly avoids unnecessary round trips to the main loop.
    • It's like a well-rehearsed pas de deux—fluid transitions between TBs.
  6. Fun Fact:

    • The TCG Front-end (FE) converts guest code to internal intermediate code in a TB.
    • The TCG Back-end (BE) translates intermediate code into host instructions.
    • It's a duet of translation magic! 🎭✨
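You can watch both halves of this duet with QEMU's built-in -d logging (a stock QEMU feature; the hello binary from the earlier user-mode section is assumed):

```shell
# Log guest instructions (in_asm), TCG intermediate ops (op), and the
# generated host code (out_asm) for every translation block, into tcg.log.
qemu-x86_64 -d in_asm,op,out_asm -D tcg.log ./hello

# Each "IN:" header in the log marks one guest TB before translation.
grep -c "IN:" tcg.log
```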

You can imagine it as the lead dancer, guiding the vCPUs through their translated routines! 💃🌟

(1) QEMU - Code Flow [ Instruction cache and TCG] - Stack Overflow.
(2) Documentation/TCG - QEMU.
(3) Emulation — QEMU documentation.
(4) Translator Internals — QEMU documentation.
(5) A deep dive into QEMU: The Tiny Code Generator (TCG), part 1.
(6) Features/tcg-multithread - QEMU.
(7) Documentation/TCG/frontend-ops - QEMU.

cpu_exec()

  1. The CPU and Execution:

    • Before we dive into specifics, let's step back and appreciate the grand theater of computing.
    • Imagine a CPU (Central Processing Unit) as the star performer—the heart and soul of any computer system.
    • Its purpose? To process data, execute instructions, and make the magic happen.
  2. cpu_exec Takes the Stage:

    • Now, let's focus on QEMU's cpu_exec function:
      • Mission: It's the conductor of the CPU orchestra. When QEMU emulates a virtual machine, this function orchestrates the execution of guest instructions.
      • Translation Blocks (TBs): Think of TBs as choreographed routines. Each TB contains a sequence of translated instructions (host code).
      • Dance Steps:
        • The cpu_exec function:
          • Checks if there's an existing TB for the current guest instruction.
          • If not, it translates the instruction (from guest architecture to host architecture) and creates a new TB.
          • Executes the TB, moving the vCPU forward.
          • Handles state changes (like privilege level shifts) gracefully.
        • It's like leading the vCPU through a well-rehearsed dance—step by step.
  3. Direct Block Chaining (Efficient Moves):

    • After executing a TB, QEMU seeks the next one:
      • lookup_and_goto_ptr:
        • Finds an existing TB matching the current CPU state.
        • If found, directly branches to that TB; otherwise, returns to the main loop.
      • goto_tb + exit_tb:
        • A quick spin: Branch to the next TB or exit gracefully.
        • Ensures interrupt handling (like unmasking interrupts) happens correctly.
  4. Why Is This Important?

    • Performance! Chaining TBs directly avoids unnecessary backstage costume changes.
    • It's like a CPU ballet—fluid transitions between translated routines.
  5. Fun Fact:

    • The TCG (Tiny Code Generator) plays a crucial role in creating these TBs.
    • It's the scriptwriter, translating guest instructions into the universal language of the host CPU.

You can imagine it as the lead dancer, guiding the vCPU through its translated routines! 💃🌟

(1) The CPU and the fetch-execute cycle What is the purpose of the CPU? - BBC.
(2) QEMU - Code Flow [ Instruction cache and TCG] - Stack Overflow.
(3) QEMU/cpu-exec.c at master · mp-lee/QEMU · GitHub.
(4) How Does a Computer Execute Code? - MUO.

tb_find_fast()

  1. Translation Blocks (TBs) and Their Importance:

    • Imagine QEMU as a theater where virtual CPUs (vCPUs) perform intricate routines. Each vCPU executes guest instructions, but these instructions are written in a language foreign to the host system.
    • Enter Translation Blocks (TBs): These are choreographed routines—sequences of translated instructions (host code) that correspond to guest instructions.
    • TBs are like dance routines: Once translated, they're cached for efficient execution.
  2. The Role of tb_find_fast:

    • Mission: The tb_find_fast function is like a backstage guide. When a vCPU needs to execute an instruction, it checks whether a corresponding TB is already in the cache.
    • Fast Search:
      • tb_find_fast uses a hash function value to find the index of the TB in the cache.
      • If the TB is valid (i.e., still relevant), it's a quick win—the vCPU can directly jump to that TB.
      • But what if the TB isn't valid anymore? That's where the next step comes in.
  3. Fallback to tb_find_slow:

    • If the fast search doesn't yield a valid TB (maybe it was invalidated due to changes in CPU state), tb_find_fast gracefully falls back to tb_find_slow.
    • tb_find_slow performs a sequential search through the cache to locate the correct TB.
    • It's like saying, "Okay, let's check every dressing room until we find the right costume for this scene."
  4. Why Is This Important?

    • Performance! Fast TB lookup avoids unnecessary retranslation.
    • It's akin to a well-rehearsed play: The actors (vCPUs) know exactly which TB to step into without wasting time.

You can imagine it as the backstage coordinator, swiftly guiding the vCPUs to their cached routines! 🎭🌟

Learn more

(1) QEMU - Code Flow [ Instruction cache and TCG] - Stack Overflow.
(2) Translator Internals — QEMU documentation.
(3) TCG Intermediate Representation — QEMU documentation.

tb_find_slow()

  1. Translation Blocks (TBs) and Their Importance:

    • Imagine QEMU as a theater where virtual CPUs (vCPUs) perform intricate routines. Each vCPU executes guest instructions, but these instructions are written in a language foreign to the host system.
    • Enter Translation Blocks (TBs): These are choreographed routines—sequences of translated instructions (host code) that correspond to guest instructions.
    • TBs are like dance routines: Once translated, they're cached for efficient execution.
  2. The Role of tb_find_slow:

    • Mission: The tb_find_slow function is like a backstage guide. When a vCPU needs to execute an instruction, it checks whether a corresponding TB is already in the cache.
    • Fast Search:
      • tb_find_fast uses a hash function value to find the index of the TB in the cache.
      • If the TB is valid (i.e., still relevant), it's a quick win—the vCPU can directly jump to that TB.
      • But what if the TB isn't valid anymore? That's where tb_find_slow steps in.
    • Fallback to Sequential Search:
      • If the fast search doesn't yield a valid TB (maybe it was invalidated due to changes in CPU state), tb_find_slow gracefully falls back to a sequential search.
      • It meticulously examines each dressing room (TB) until it finds the right costume for the current scene (instruction).
      • This ensures correctness even when TBs need refreshing due to state changes.
  3. Why Is This Important?

    • Performance! Fast TB lookup avoids unnecessary retranslation.
    • But when the cache needs a refresh, tb_find_slow ensures we don't miss our cues.
    • It's akin to a well-rehearsed play: The actors (vCPUs) know exactly which TB to step into without wasting time.

You can imagine it as the backstage coordinator, meticulously searching for the right costumes in the cache! 🎭🌟

Learn more

(1) QEMU - Code Flow [ Instruction cache and TCG] - Stack Overflow.
(2) Translator Internals — QEMU documentation.
(3) arm - How to use QEMU's simple trace backend? - Stack Overflow.

cpu_tb_exec()

  1. Translation Blocks (TBs) and Their Importance:

    • Imagine QEMU as a grand theater where virtual CPUs (vCPUs) perform intricate routines. Each vCPU executes guest instructions, but these instructions are written in a language foreign to the host system.
    • Enter Translation Blocks (TBs): These are choreographed routines—sequences of translated instructions (host code) that correspond to guest instructions.
    • TBs are like dance routines: Once translated, they're cached for efficient execution.
  2. The Role of cpu_tb_exec:

    • Mission: The cpu_tb_exec function is like the lead dancer. When a vCPU needs to execute an instruction, it steps onto the stage (the current TB).
    • TB Execution:
      • The TB contains a sequence of translated instructions.
      • cpu_tb_exec directs the vCPU through this TB, executing the translated dance moves.
      • It handles state changes (like privilege level shifts) gracefully.
    • Direct Block Chaining (Efficient Moves):
      • After executing a TB, QEMU seeks the next one:
        • tb_find_fast: Tries a fast lookup using a hash function value. If the TB is valid, it's a quick win—the vCPU jumps directly to that TB.
        • tb_find_slow: If the fast search fails (maybe due to state changes), it gracefully falls back to a sequential search. It meticulously examines each dressing room (TB) until it finds the right costume for the current scene (instruction).
  3. Why Is This Important?

    • Performance! Chaining TBs directly avoids unnecessary retranslation.
    • It's like a well-rehearsed pas de deux—fluid transitions between translated routines.

You can imagine it as the lead dancer, guiding the vCPU through its translated routines! 💃🌟

Learn more

(1) QEMU - Code Flow [ Instruction cache and TCG] - Stack Overflow.
(2) Translator Internals — QEMU documentation.
(3) TCG Intermediate Representation — QEMU documentation.

tb_gen_code()

  1. Translation Blocks (TBs) Recap:

    • Imagine QEMU as a grand theater where virtual CPUs (vCPUs) perform intricate routines. Each vCPU executes guest instructions, but these instructions are written in a language foreign to the host system.
    • Enter Translation Blocks (TBs): These are choreographed routines—sequences of translated instructions (host code) that correspond to guest instructions.
    • TBs are like dance routines: Once translated, they're cached for efficient execution.
  2. The Role of tb_gen_code:

    • Mission: The tb_gen_code function is like the costume designer. When QEMU encounters guest code that has not yet been translated, it creates a fresh TB for it.
    • TB Generation:
      • The tb_gen_code function:
        • Allocates a new TB for the guest code starting at the requested program counter.
        • Invokes the translator to fill the TB with host-specific machine code.
        • Inserts the TB into the lookup structures so future executions can find it.
      • It's like designing a unique costume for each scene in the play.
  3. TB Size and Hashing:

    • Each TB has a maximum size (typically 512 guest instructions). If a block would grow beyond that, QEMU simply ends the TB there and starts a new one.
    • The fast lookup (tb_find_fast) uses a hash function value to find the index of the TB in the cache.
    • If the TB is valid (still relevant), it's a quick win—the vCPU can directly jump to that TB.
    • Otherwise, tb_find_slow performs a sequential search to locate the correct TB.
  4. Why Is This Important?

    • Performance! Generating TBs efficiently ensures smooth execution.
    • It's like having a well-stocked wardrobe backstage—ready for any scene.

Think of tb_gen_code as the costume designer, tailoring a unique outfit for each block of guest code! 🎭🌟

Learn more

(1) QEMU - Code Flow [ Instruction cache and TCG] - Stack Overflow.
(2) qemu_blog/exec.md at main · airbus-seclab/qemu_blog · GitHub.
(3) Features/TCGCodeQuality - QEMU.
(4) Translator Internals — QEMU documentation.

cpu_gen_code()

  1. Translation Blocks (TBs) and TCG:

    • QEMU's dynamic translation backend is called TCG (Tiny Code Generator). It's responsible for converting guest instructions (from the virtual machine) into host-specific machine code.
    • TBs are the building blocks of TCG. Each TB contains a sequence of translated instructions.
  2. The Role of cpu_gen_code:

    • Mission: In older QEMU versions, cpu_gen_code is the master tailor: it does the actual translation work for a TB that tb_gen_code has allocated.
    • TB Generation:
      • cpu_gen_code:
        • Calls the frontend (gen_intermediate_code) to turn the guest instructions into TCG ops.
        • Calls the backend (tcg_gen_code) to turn those ops into host-specific machine code.
        • Records the size of the generated code in the TB.
      • It's like cutting and sewing the costume the designer ordered.
  3. TB Size and Hashing:

    • Each TB has a maximum size (typically 512 guest instructions). If a block would grow beyond that, QEMU simply ends the TB there and starts a new one.
    • The fast lookup (tb_find_fast) uses a hash function value to find the index of the TB in the cache.
    • If the TB is valid (still relevant), it's a quick win—the vCPU can directly jump to that TB.
    • Otherwise, tb_find_slow performs a sequential search to locate the correct TB.
  4. Why Is This Important?

    • Performance! Generating TBs efficiently ensures smooth execution.
    • It's like having a well-stocked wardrobe backstage—ready for any scene.

Think of cpu_gen_code as the master tailor, crafting a unique outfit for each block of guest code! 🎭🌟

Learn more

(1) Translator Internals — QEMU documentation.
(2) A deep dive into QEMU: The Tiny Code Generator (TCG), part 1.
(3) Features/TCGCodeQuality - QEMU.
(4) TCG Intermediate Representation — QEMU documentation.

gen_intermediate_code()

  1. Intermediate Code Generation:

    • Before we dive into specifics, let's talk about intermediate code. Imagine it as the script that bridges the gap between the guest (source) language and the host (target) machine language.
    • Intermediate code decouples front ends from back ends: each guest architecture needs only a translator into TCG ops, and each host architecture needs only a backend out of TCG ops, instead of a dedicated translator for every guest/host pair.
  2. The Role of gen_intermediate_code:

    • Mission: The gen_intermediate_code function is like the playwright. For each new block of guest code, it writes the script: the intermediate representation (IR).
    • IR Generation:
      • gen_intermediate_code:
        • Walks the guest instructions one by one from the block's start address.
        • Emits TCG ops that capture the essential semantics of each instruction.
        • Stops at a control-flow instruction, a page boundary, or the per-TB size limit.
      • The IR serves as a bridge between the guest and host worlds.
  3. Why Is This Important?

    • Efficiency: By working with an intermediate representation, QEMU can perform optimizations and analyses before generating the final machine code.
    • It's like having a rehearsal before the actual performance—fine-tuning the steps.

Think of gen_intermediate_code as the playwright, crafting the script that guides the translation process! 🎭🌟

Learn more

(1) TCG Intermediate Representation — QEMU documentation.
(2) Compiler - Intermediate Code Generation - Online Tutorials Library.
(3) Intermediate Code Generation in Compiler Design - GeeksforGeeks.
(4) Chapter Intermediate-Code Generation.
(5) Can I examine Qemu tcg ir code? If so how? - Stack Overflow.

gen_intermediate_code_internal()

  1. Intermediate Representation (IR) and TCG:

    • Before we dive into specifics, let's talk about intermediate code. Imagine it as the script that bridges the gap between the guest (source) language and the host (target) machine language.
    • Intermediate code decouples front ends from back ends: each guest architecture needs only a translator into TCG ops, and each host architecture needs only a backend out of TCG ops, instead of a dedicated translator for every guest/host pair.
  2. The Role of gen_intermediate_code_internal:

    • Mission: In older QEMU versions, gen_intermediate_code_internal is the worker behind gen_intermediate_code: it contains the actual translation loop for a block of guest code.
    • IR Generation:
      • gen_intermediate_code_internal:
        • Loops over the guest instructions, invoking the instruction decoder (such as disas_insn on x86) for each one.
        • Each decoded instruction is expanded into TCG ops that capture its essential semantics.
        • The loop ends at a control-flow instruction or when the block reaches its size limit.
      • The IR serves as a bridge between the guest and host worlds.
  3. Why Is This Important?

    • Efficiency: By working with an intermediate representation, QEMU can perform optimizations and analyses before generating the final machine code.
    • It's like having a rehearsal before the actual performance—fine-tuning the steps.

Think of gen_intermediate_code_internal as the playwright, writing out the script line by line! 🎭🌟

Learn more

(1) TCG Intermediate Representation — QEMU documentation.
(2) Translator Internals — QEMU documentation.
(3) Target Code Generation - University of Washington.
(4) Can I examine Qemu tcg ir code? If so how? - Stack Overflow.

disas_insn()

  1. Dissecting disas_insn:

    • The disas_insn function is like a backstage spotlight operator. Its job? Illuminate the guest instructions and reveal their secrets.
    • Let's break it down:
  2. Guest Instructions Meet the Spotlight:

    • When QEMU encounters a guest instruction (from the virtual machine), it needs to understand what it does.
    • disas_insn steps in:
      • Takes the guest instruction (in its raw form).
      • Deciphers its semantics, opcode, operands, and any addressing modes.
      • Shines a spotlight on the instruction's details.
  3. Why Is This Important?

    • Correct translation and useful debugging both start with correct decoding. Imagine you're directing a play:
      • You want to know which actor (instruction) is on stage.
      • What lines (operands) they deliver.
      • How they move (addressing modes).
    • disas_insn recovers exactly this information and, in the x86 frontend, emits the TCG ops that implement the instruction.
  4. Fun Fact:

    • The -d op option in QEMU enables debug printing of TCG ops (the internal intermediate code).
    • You can also trace the guest and host assembly with -d in_asm and -d out_asm.
    • The -D file option redirects this logging to a file for detailed analysis.

Think of disas_insn as the spotlight operator, revealing the intricacies of guest instructions! 🎭🌟

Learn more

(1) Can I examine Qemu tcg ir code? If so how? - Stack Overflow.
(2) QEMU - Code Flow [ Instruction cache and TCG] - Stack Overflow.
(3) Translator Internals — QEMU documentation.

tcg_gen_code()

  1. TCG (Tiny Code Generator) and Intermediate Representation (IR):

    • TCG is the engine within QEMU that translates guest instructions (from the virtual machine) into host-specific machine code.
    • The frontend first produces an intermediate representation (IR); tcg_gen_code is where that IR becomes final host code.
  2. The Role of tcg_gen_code:

    • Mission: The tcg_gen_code function is like the conductor. Once the frontend has written the score (the TCG ops for a block), tcg_gen_code performs it, turning the IR into host machine code.
    • Host Code Generation:
      • tcg_gen_code:
        • Takes the buffer of TCG ops produced for the translation block.
        • Runs optimization and liveness analysis over the ops.
        • Dispatches each surviving op to the backend emitter (tcg_out_op) to produce host instructions.
      • The result is the executable body of the TB.
  3. Why Is This Important?

    • Efficiency: The optimization and liveness passes run here, so redundant and dead ops never make it into the host code.
    • It's like a final rehearsal before the performance, cutting the steps nobody will see.

Think of tcg_gen_code as the conductor, turning the written score into a live performance! 🎭🌟

Learn more

(1) TCG Intermediate Representation — QEMU documentation.
(2) Documentation/TCG/frontend-ops - QEMU.
(3) Features/TCG - QEMU.
(4) Features/tcg-multithread - QEMU.

tcg_gen_code_common()

  1. TCG (Tiny Code Generator) and Intermediate Representation (IR):

    • TCG is the engine within QEMU that translates guest instructions (from the virtual machine) into host-specific machine code.
    • The frontend first produces an intermediate representation (IR); the tcg_gen_code family is where that IR becomes final host code.
  2. The Role of tcg_gen_code_common:

    • Mission: In older QEMU versions, tcg_gen_code_common holds the core emission loop shared by tcg_gen_code and its search-pc variant.
    • Host Code Generation:
      • tcg_gen_code_common:
        • Iterates over the TCG ops of the block.
        • Dispatches each op to the backend (tcg_out_op) to emit the corresponding host instructions.
        • Handles the bookkeeping shared by both callers.
      • The emitted host code is what the vCPU actually runs.
  3. Why Is This Important?

    • Efficiency: Keeping one shared loop avoids duplicating the emission logic across callers.
    • It's like one conductor leading every performance of the same piece.

Think of tcg_gen_code_common as the conductor's shared score, reused for every performance! 🎭🌟

Learn more

(1) TCG Intermediate Representation — QEMU documentation.
(2) Multi-threaded TCG — QEMU documentation.
(3) Features/TCG - QEMU.
(4) Documentation/TCG/frontend-ops - QEMU.

tcg_out_op()

  1. TCG and Intermediate Representation (IR) Recap:

    • TCG (Tiny Code Generator) is the heart of QEMU's dynamic translation engine.
    • Before generating final host-specific machine code, TCG operates on an intermediate representation (IR).
  2. The Role of tcg_out_op:

    • Mission: The tcg_out_op function is the backend dispatcher. For each TCG op in the IR, it writes the final script: the host machine instructions that implement it.
    • Host Code Emission:
      • tcg_out_op:
        • Takes a single TCG operation (op) and its arguments.
        • Emits the host instructions that implement it into the code buffer.
        • Is implemented separately in each TCG backend (one per host architecture).
      • The emitted host code is the final translation the vCPU executes.
  3. Why Is This Important?

    • Performance! The quality of the code tcg_out_op emits directly determines how fast translated guest code runs.
    • It's like the closing number of the show: everything rehearsed so far has to land here.

Think of tcg_out_op as the scriptwriter of the final act, writing the host instructions the vCPU will actually perform! 🎭🌟

Learn more

(1) TCG Intermediate Representation — QEMU documentation.
(2) A deep dive into QEMU: The Tiny Code Generator (TCG), part 1.
(3) Documentation/TCG/frontend-ops - QEMU.
(4) Documentation/TCG - QEMU.
(5) Can I examine Qemu tcg ir code? If so how? - Stack Overflow.

tcg_out_op_YYY()

  1. TCG and Intermediate Representation (IR) Recap:

    • TCG is the engine within QEMU that translates guest instructions (from the virtual machine) into host-specific machine code.
    • After the frontend has produced the intermediate representation (IR), a backend must encode each IR op as host instructions.
  2. The Role of tcg_out_op_YYY:

    • Mission: The tcg_out_op_YYY functions are the specialists behind the dispatcher: each backend defines per-operation helpers (the YYY part varies by backend and operation) that encode one family of host instructions.
    • Host Code Emission:
      • tcg_out_op_YYY:
        • Is called from tcg_out_op for the matching TCG operation.
        • Encodes the operation's operands into the host instruction format.
        • Appends the resulting bytes to the code buffer.
      • Together, these helpers produce the host code the vCPU executes.
  3. Why Is This Important?

    • Performance! Tight per-operation encodings keep the generated host code small and fast.
    • It's like each specialist perfecting a single move in the routine.

Learn more

(1) TCG Intermediate Representation — QEMU documentation.
(2) Documentation/TCG/frontend-ops - QEMU.
(3) QEMU - Code Flow [ Instruction cache and TCG] - Stack Overflow.

Related Knowledge

Virtual Machine (VM)

A virtual machine (VM) is an operating system (OS) or application environment installed on software that imitates dedicated hardware. The end user's experience when using a VM is equivalent to that of using dedicated hardware.

How do VMs work?

A VM provides an isolated environment for running its own OS and applications, independent from the underlying host system or other VMs on that host. A VM's OS, commonly referred to as the guest OS, can be the same as or different from the host OS and the OSes of other VMs on the host.

A single computer can host multiple VMs running different OSes and applications without affecting or interfering with each other. Although the VM is still dependent on the host's physical resources, those resources are virtualized and distributed across the VMs and can be reassigned as necessary. This makes it possible to run different environments simultaneously and accommodate fluctuating workloads.

image

  • From the user's perspective, the VM operates much like a bare-metal machine. In most cases, users connecting to a VM are not aware that they are using a virtual environment. Users can configure and update the guest OS and its applications as necessary and install or remove new applications without affecting the host or other VMs. Resources such as CPUs, memory and storage appear much as they do on a physical computer, although users might run into occasional glitches, such as not being able to run an application in a virtual environment.

The role of hypervisors in virtualization

Hosting VMs on a computer requires a specialized type of software called a hypervisor, which manages resources and allocates them to VMs. The hypervisor also schedules and adjusts how resources are distributed based on the configuration of the hypervisor and VMs, including reallocating resources as demands fluctuate.

The hypervisor emulates the computer's CPU, memory, hard disk, network and other hardware resources, creating a pool of resources to allocate to individual VMs according to their specific requirements. The hypervisor can support multiple virtual hardware platforms that are isolated from each other, enabling VMs to run Linux and Windows Server OSes on the same physical host.

Most hypervisors do not require special hardware components. However, the computer that runs the hypervisor must have the resources necessary to support VMs, the hypervisor's operations and the host's own operations.

Hypervisor

A hypervisor is software that you can use to run multiple virtual machines on a single physical machine. Every virtual machine has its own operating system and applications. The hypervisor allocates the underlying physical computing resources, such as CPU and memory, to individual virtual machines as required.

image

image

Benefits of hypervisors

image

  • Increased hardware efficiency
    • By providing a physical host system with the ability to run multiple guest operating systems alongside one another, hypervisors enable more of the physical compute resources of the host computer to be used. This increase in utilization vastly expands the capabilities of the hardware and improves efficiency.

image

  • Enhanced portability
    • By isolating VMs from the underlying host hardware, hypervisors make them independent of, as well as invisible to, one another. This in turn makes live migration of virtual machines possible, enabling the move or migration of VMs between different physical machines and remote virtualized servers without stopping them, which enables fail-over and load balancing.

image

  • Improved security
    • Although they run on the same host machine, VMs are logically isolated from one another and have no dependence on other VMs. A crash, attack, or malware infection on one VM does not spread to the others, which makes hypervisor-based isolation a strong security boundary.

The different types of hypervisors

image

Developed by IBM in the 1960s to enable partitioning—and more efficient use—of resources within its mainframe computers, hypervisor technology matured and became a key element of the hardware virtualization that was added to PCs and servers. Hypervisors enabled Linux and Unix systems to expand hardware capabilities, improve reliability, and manage costs. Today’s hypervisors are available in two primary types.

Type 1 hypervisors

image

Type 1 hypervisors, also called bare-metal hypervisors, run directly on the computer's hardware, without any operating system or other underlying software. They require a separate management machine to administer and control the virtual environment. Type 1 hypervisors are highly secure because they have direct access to the physical hardware, with nothing in between that could be compromised in an attack. They allow more resources to be assigned to virtual machines than are physically available (overcommitment), and since each instance consumes only the resources it needs, they are also highly efficient. These two features make Type 1 hypervisors a central element in enterprise datacenters.

In addition to server operating systems, Type 1 hypervisors can also virtualize desktop operating systems. This is the foundation of virtual desktop infrastructure (VDI), which allows users to access desktop environments such as Windows or Linux that are running inside virtual machines on a central server. Through a connection broker, the hypervisor assigns a virtual desktop from a pool to a single user who accesses it over the network, enabling remote work from any device. Citrix VDI solutions deliver this functionality from both on-premises servers and via the cloud.

Type 2 hypervisor

image

Type 2 hypervisors, also called hosted hypervisors, run as applications inside an operating system. They depend on the host operating system to perform their function like any other application; the guest operating system runs as a process on the host, and the hypervisor isolates the guest from the host. Multiple Type 2 hypervisors can run on top of a single host operating system, and each hypervisor may itself run multiple guest operating systems. Type 2 hypervisors are simple to set up and enable quick data exchange between applications on the guest and host operating systems, but they are not suited to the complex workloads that Type 1 hypervisors run.

KVM (Kernel-based Virtual Machine)

KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko.

Kernel-based Virtual Machine (KVM) is a software feature that you can install on physical Linux machines to create virtual machines. A virtual machine is a software application that acts as an independent computer within another physical computer. It shares resources like CPU cycles, network bandwidth, and memory with the physical machine. KVM is a Linux operating system component that provides native support for virtual machines on Linux. It has been available in Linux distributions since 2007.

Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.

KVM is open source software. The kernel component of KVM is included in mainline Linux, as of 2.6.20. The userspace component of KVM is included in mainline QEMU, as of 1.3.

Working of KVM

image

A hypervisor distributes the machine's physical capacity among its virtual machines in a balanced manner. KVM turns Linux into a type-1 (bare-metal) hypervisor. All hypervisors need operating-system-level components to run VMs, such as a memory manager, process scheduler, input/output (I/O) stack, device drivers, security manager, and network stack. KVM has all of these because it is part of the Linux kernel. Every VM is implemented as a regular Linux process, scheduled by the standard Linux scheduler, with dedicated virtual hardware like a network card, graphics adapter, CPU(s), memory, and disks.

  • A Type 1 hypervisor uses the hardware of the system directly to create virtual machines, whereas a Type 2 hypervisor goes through the software of a host operating system.

  • KVM blurs this distinction: it is a kernel module that makes the Linux kernel itself act as the hypervisor. Because the kernel runs directly on the hardware, a Linux host with KVM loaded is usually classified as a type-1 hypervisor, even though from the outside it still looks like an ordinary operating system.

KVM stack

image
Above is the KVM stack, which consists of four layers:

  • User-facing tools: Virtual machine management tools that support KVM, with either a graphical interface (like virt-manager) or a command-line interface (like virsh).
  • Management layer: The libvirt library, which provides APIs that management tools use to interact with KVM (and other hypervisors) to create and manage virtualization resources. KVM itself has no management interface or device-emulation capability of its own.

image

  • Virtual machine: The virtual machines created by users. Even when tools like virsh or virt-manager are not used, KVM works in conjunction with a userspace emulator, typically QEMU, which provides the device models.
  • Kernel support: KVM itself, consisting of a kernel module that provides the core virtualization infrastructure (kvm.ko) and a processor-specific module for Intel VT-x or AMD-V (kvm-intel.ko or kvm-amd.ko).

KVM features

Security

  • In the KVM architecture, virtual machines are regular Linux processes, so they benefit from the security model of Linux systems such as SELinux, which provides resource isolation and control.
  • In addition, the sVirt project provides Mandatory Access Control (MAC) security for virtualization by building on SELinux, giving administrators an infrastructure for defining policies that isolate virtual machines. sVirt ensures that one virtual machine's resources cannot be accessed by any other process; administrators can also relax this to grant special permissions, for example grouping virtual machines that share common resources.

Memory Management

  • KVM inherits Linux's powerful memory management features. A virtual machine's memory is stored like that of any other Linux process and can be swapped. KVM supports NUMA (Non-Uniform Memory Access, the memory design used in multiprocessor systems), allowing large memory areas to be used effectively.
  • KVM supports the latest memory-virtualization features from CPU vendors, such as Intel's Extended Page Tables (EPT) and AMD's Rapid Virtualization Indexing (RVI), to reduce CPU usage and provide higher throughput.
  • KVM also supports memory page sharing through the kernel feature Kernel Same-page Merging (KSM).

Storage

  • KVM can use any storage supported by Linux to store virtual machine images, including local drives (IDE, SCSI, SATA), network-attached storage (NAS) such as NFS and Samba/CIFS, and SANs reached over iSCSI or Fibre Channel.
  • KVM thereby takes advantage of the reliable storage systems of leading vendors in the storage industry.
  • KVM also supports virtual machine images on shared file systems such as GFS2, allowing images to be shared between multiple hosts or stored on shared logical volumes.

Live migration

  • KVM supports live migration: the ability to move a running virtual machine between physical hosts without interrupting service. Live migration is transparent to the user; the virtual machine remains powered on, network connections stay active, and applications keep running while the VM moves to its new host. KVM can also save a virtual machine's current state so it can be stored and resumed later.

Performance and scalability

  • KVM inherits the performance and scalability of Linux. Documentation of the era cited support for guests with 16 virtual CPUs and 256 GB of RAM on hosts with up to 256 cores and over 1 TB of RAM; these limits have since grown considerably.

References

Virtual Machine - VM
What is a hypervisor
What is a hypervisor?
Are hardware drivers needed to be installed on the management OS of a type 1 hypervisor?
Linux-kvm ~ main page
What is KVM
What is Kernel-based Virtual Machines (KVM)?
KVM basic
KVM intro
QEMU
KVM - QEMU
Qemu/KVM Virtual Machines
Understanding Hypervisors: Exploring Type-1 vs Type-2 and Full vs Para Virtualization
Management Tools
Difference between KVM and QEMU
How do KVM and QEMU work together?