libvirt

At the heart of libvirt is a set of APIs for virtualization management and a server-side realization of those APIs, the daemon libvirtd. Utilities officially released by the libvirt project include

  1. two other server-side daemons, virtlockd and virtlogd, that work alongside the main daemon libvirtd,
  2. a daemon management client virt-admin,
  3. a general virtualization management client virsh (this is the client-side realization of the libvirt APIs),
  4. an experimental QEMU-specific virtualization management client virt-qemu-run,
  5. an LXC shell virt-login-shell (yes, libvirt can manage LXC instances),
  6. a sanlock cleanup tool virt-sanlock-cleanup,
  7. validation tools virt-host-validate, virt-pki-validate, virt-xml-validate (see the example after this list), and
  8. key code tables.
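
As an illustration of the validation tools, virt-host-validate checks whether the local host is set up to run a given hypervisor; the qemu argument below selects the QEMU/KVM checks:

virt-host-validate qemu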

Libvirt was originally made for Xen[1] but later became a common management layer for several backends, including KVM and LXC.

Hypervisors: type I or type II?

The ambiguity between type I and type II hypervisors is only getting worse, unlike that between type I and type II diabetes. There are arguments surrounding the classification of KVM and Hyper-V as type I or type II. Unless you are a VMM (virtual machine monitor, another name for hypervisor) architect, the distinction remains mostly irrelevant, but keep these points in mind:

  1. VMware's product line includes hypervisors of both types.
  2. The following are unanimously agreed as being of type II:
    • Oracle VM VirtualBox
    • VMware Workstation Pro / VMware Fusion
    • Parallels Desktop
  3. Hyper-V is fundamental to Windows whereas KVM is fundamental to Linux.
  4. Being type I does not imply better performance.

Landscape: KVM, QEMU, libvirt, virt-manager

KVM is the Linux-native hypervisor. It exposes a set of low-level APIs in the form of ioctl calls on the /dev/kvm device; they feel more like hypercalls than ordinary syscalls, actually.
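
A quick sanity check, assuming a Linux host: the KVM API is reachable if the device node exists and the modules are loaded (module names vary by CPU vendor):

ls -l /dev/kvm
lsmod | grep kvm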

QEMU can work as a standalone type II hypervisor, but normally QEMU wraps the KVM APIs behind a command-line interface and introduces a higher-level abstraction at the same time. The complete lifecycle of a VM is more conveniently managed with QEMU. QEMU also provides tools for the creation and inspection of disk images.
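
For example, disk images are handled by the qemu-img tool; the file name and size below are arbitrary:

qemu-img create -f qcow2 /tmp/disk.qcow2 10G
qemu-img info /tmp/disk.qcow2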

Libvirt provides yet another layer of abstraction on top of QEMU to improve usability. Users can interact with QEMU only by means of command-line arguments, with the exception of the interactive QEMU monitor shell available after the VM is created. This makes QEMU very verbose and pushes users toward crafting lots of shell scripts. Libvirt, on the other hand, takes a declarative approach: all resources are defined as XML documents, and libvirtd realizes them for you, be it a VM, storage, or a network. Similar to the QEMU monitor, virsh is an interactive shell communicating with libvirtd, but with reduced granularity and increased functionality.
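
To see the declarative format in action, the XML definition of an existing domain can be dumped with virsh (<vm> being a placeholder):

virsh dumpxml <vm>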

The virt-manager project accompanies libvirt with the virt-install CLI and the virt-manager GUI. Libvirt itself has no graphical client, and crafting XML configuration files is no less work than typing command-line arguments as in the case of QEMU; note, however, that using static configuration files is a different paradigm that fares better in stability and manageability.

Virt-install is a simple CLI that creates libvirt XML configuration file templates. In a sense, virt-install is similar to the QEMU CLI, except that virt-install deals solely with VM creation, adheres to libvirt semantics, and provides sane defaults.

Virt-manager is a graphical combination of virt-install and virsh bundled with a VNC/SPICE client. However, not all features of virsh are implemented by virt-manager; a notable omission for the time being is device hot-(un)plugging.







[Diagram: tool hierarchy. The virt-install CLI and the virt-manager GUI (project virt-manager) and virsh (project libvirt) all communicate with libvirtd over XDR-based RPC; libvirtd controls QEMU over QMP; QEMU drives KVM via ioctl.]

Connecting to libvirtd

See the official documentation for details.

Local connection

In the simplest case, a libvirtd instance listens on a UNIX domain socket protected by uid/gid. Users and groups with read/write permission on the socket of a libvirtd instance have control over that instance. A libvirtd instance can be created by any user, resulting in varying euid/egid, by issuing:

libvirtd -d

The libvirtd instance created by a less-privileged user session is, expectedly, forbidden from carrying out certain actions, such as using LVM pools or attaching to Linux bridges. Fortunately, granular permissions can be granted via PolKit. For convenience, a user is usually added to the libvirt group, allowing the user to read from and write to the socket associated with the system libvirtd instance, which has root privilege.

sudo usermod -a -G libvirt <user>

Virsh, virt-install, and virt-manager all incorporate the concept of a connection URI; as an example:

virsh --connect qemu:///system ...

qemu:///system is the connection URI for the system libvirtd instance with root privilege, to which users of the libvirt group can connect. By default the connection URI is qemu:///session, meaning the following two lines are identical:

virsh --connect qemu:///session ...
virsh ...

Similarly, for virt-install we have:

virt-install --connect qemu:///system ...
virt-install --connect qemu:///session ...
virt-install ... # same as the previous line

Remote connection

Many options exist: unencrypted TCP socket, TLS, libssh, libssh2, OpenSSH, etc. We find the OpenSSH approach most convenient.

virsh --connect qemu+ssh://<ssh host>/system ...

This effectively connects to <ssh host> via OpenSSH (.ssh/config is respected) and proxies the system libvirtd socket with netcat. Note that the netcat program on the server must support the -U option. In this case, the user that <ssh host> refers to must have access to the system libvirtd socket.

To connect to a session libvirtd instance:

virsh --connect 'qemu+ssh://<ssh host>/session?socket=<path to socket>' ...

The user associated with <ssh host> must have already started the session libvirtd instance, and the corresponding socket must be indicated explicitly in the connection URI. The default socket path can be found in the help output of:

libvirtd -h
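
As a sketch, on a typical modern distribution the session socket lives under XDG_RUNTIME_DIR, so the URI could look like the following (the uid 1000 is an assumption):

virsh --connect 'qemu+ssh://<ssh host>/session?socket=/run/user/1000/libvirt/libvirt-sock' ...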

Connection aliases

Connection URIs could get very long. Much like ssh_config, users can define aliases in a configuration file. The configuration file is located at /etc/libvirt/libvirt.conf for the root user or $XDG_CONFIG_HOME/libvirt/libvirt.conf for any other user. The syntax is of the form:

uri_aliases = [
    "<alias>=<connection URI>",
    "system=qemu+ssh://<ssh host>/system",
    "session=qemu://<ssh host>/session?socket=<path to socket>",
]

With these aliases defined, the following two lines are identical:

virsh --connect <alias> ...
virsh --connect <connection URI> ...

The configuration file also affects virt-install and virt-manager.

Default connection

The default connection seems to be qemu:///session, yet it can be overridden according to the following rules.

  1. If the environment variable LIBVIRT_DEFAULT_URI is set, it takes precedence over everything else.
  2. Second in priority is the uri_default field in the configuration file.

Both LIBVIRT_DEFAULT_URI and uri_default accept an alias. Suppose the configuration file contains:

uri_aliases = [
    "<alias>=<connection URI>",
]
uri_default = "<alias>"

These lines are the same:

virsh ...
LIBVIRT_DEFAULT_URI=<alias> virsh ...
virsh --connect <connection URI> ...

Once again, the configuration file affects virt-install and virt-manager as well.
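
To check which connection URI actually takes effect, virsh can print it:

virsh uri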

Using virsh

The general usage of virsh looks like:

virsh [--connect <connection URI>] <subcommand> ...

If virsh is called without a subcommand, the user is presented with an interactive shell:

$ virsh [--connect <connection URI>]
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # 

The interactive shell liberates the user from typing the virsh command and the connection URI.

Examples are illustrated with the interactive shell connected to a system libvirtd instance.

Libvirt calls VMs domains.

Listing domains

List active domains:

virsh # list

List all domains:

virsh # list --all

List all domains marked autostart:

virsh # list --all --autostart

Note that marking a domain as autostart only makes sense if the domain is managed by a system libvirtd instance, as session libvirtd instances are not started at boot, making it impossible to autostart such a domain.
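
Marking and unmarking a domain as autostart is itself done through virsh:

virsh # autostart <vm>
virsh # autostart --disable <vm>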

Creation, modification, and destruction of domains

Suppose that there is a well-defined domain XML /path/to/<vm>.xml.

To associate it with the libvirtd instance:

virsh # define /path/to/<vm>.xml

The file /path/to/<vm>.xml will be copied to a predefined location (typically /etc/libvirt/qemu/ for the system instance), and subsequent modifications to the domain <vm> through libvirt will persist in that copy.

Now, if you run virsh # list --all, you will see the domain <vm> whose state is shut-off. In the shut-off state, the XML file defining the domain <vm> can be modified manually by

virsh # edit <vm>

This will open the XML file with $EDITOR. On save, the modified XML file will be validated, and you will be prompted on validation failure.

To disassociate the domain from the libvirtd instance:

virsh # undefine <vm>

This will fail if the domain is still running. Shut it down first, or stop it forcefully and then undefine it:

virsh # destroy <vm>
virsh # undefine <vm>

To start a domain:

virsh # start <domain>

To gracefully shut down a domain:

virsh # shutdown <domain>

This does not always work, as the guest must be able to receive the request, for instance via ACPI or a guest agent that communicates with the hypervisor. However, we can always destroy the running instance as if it lost power:

virsh # destroy <domain>

The same goes for restarting the domain. There is the graceful way and one that is not:

virsh # reboot <domain>
virsh # reset <domain>

Attaching and detaching devices to running domains

Virt-manager seems to be missing these features.
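
With virsh, hot-(un)plugging is done by passing a device XML snippet; a minimal sketch, assuming a device definition at the hypothetical path /path/to/device.xml:

virsh # attach-device <vm> /path/to/device.xml --live
virsh # detach-device <vm> /path/to/device.xml --live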

Installation: Fedora

  • Server-side: KVM, QEMU, libvirtd
  • Client-side: virsh, virt-install, virt-manager

In fact, QEMU accepts remote connections via QMP. However, we want libvirtd to manage other VM-related resources on the machine where QEMU resides. This implies co-locating libvirtd with QEMU.

If you want to install everything on the server, Fedora provides a handy package group:

sudo dnf install @virtualization

To also pull the optional packages, including guestfs and virt-top:

sudo dnf group install --with-optional virtualization

To inspect the detail of the virtualization package group (before or after installation):

dnf groupinfo virtualization

The following describes minimal separate installs.

Server-side

sudo dnf install qemu-kvm libvirt-daemon

To be able to run osinfo-query os, install the libosinfo package without its weak dependencies (which are, of all things, fonts):

sudo dnf install --setopt=install_weak_deps=False libosinfo

If a domain needs to read a file in your home directory (an Ignition file in this example), set the correct SELinux label:

chcon -t svirt_home_t test.ign
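
The label can be verified afterwards:

ls -Z test.ign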

To be able to connect to the system mode libvirt daemon, add your user to the libvirt group:

sudo usermod -a -G libvirt <user>

Client-side

sudo dnf install libvirt-bash-completion libvirt-client virt-manager

To connect to the system mode libvirt daemon by default, add the following to ~/.config/libvirt/libvirt.conf.

uri_default = "qemu:///system"

Check libvirt networks and storage pools:

virsh net-list --all
virsh pool-list --all

If the pool list is empty, create one named default from the existing directory /var/lib/libvirt/images:

virsh pool-define-as default dir --target /var/lib/libvirt/images
virsh pool-build default
virsh pool-autostart default
virsh pool-start default

The directory /var/lib/libvirt/images was automatically created when the libvirt package was installed. This directory has the correct SELinux labels. We only have to grant the libvirt group read-write access to the directory for flexible installation of image files:

sudo chown root:libvirt /var/lib/libvirt/images
sudo chmod g+rw /var/lib/libvirt/images

Users experienced with virt-manager might find it perplexing that there is no default storage pool even if virt-manager is installed. As it turns out, the default storage pool is only created upon the first execution of virt-manager.

For example, to add the CentOS 8 iso to the default pool:

curl -L \
    -o /var/lib/libvirt/images/centos8-netboot.iso \
    http://mirror01.idc.hinet.net/centos/8.2.2004/isos/x86_64/CentOS-8.2.2004-x86_64-boot.iso

List the volumes in the default pool:

virsh vol-list --pool default

Why is it empty? It turns out that the pool has to be refreshed:

virsh pool-refresh default

Why is it designed this way? Libvirtd is designed to handle all storage-related uploads and downloads through the libvirt APIs, which excludes manually moving files into and out of the directories underlying storage pools.
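
For completeness, the API-driven way to bring a local file into a pool (no pool-refresh needed) is to create a volume and upload into it; the volume size below is an assumption and must be at least the file size:

virsh vol-create-as default centos8-netboot.iso 800M --format raw
virsh vol-upload centos8-netboot.iso /path/to/local.iso --pool default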

Why save iso files in a storage pool? It allows virt-install to reference an iso file with vol=default/<iso> instead of path=/path/to/iso. Not only is the latter more verbose, but one might also have to deal with additional permissions and SELinux settings.

Let's define a domain.

virt-install \
    --name cent \
    --memory 4096 \
    --vcpus 2 \
    --os-variant centos8 \
    --boot hd,cdrom,useserial=on,menu=on \
    --disk size=10,bus=virtio \
    --disk vol=default/centos8-netboot.iso,device=cdrom \
    --network network=default,model=virtio \
    --graphics none \
    --noautoconsole \
    --noreboot

You should now find the domain named cent with:

virsh list --all

We will now boot it with a serial console:

screen
virsh start cent --console

At the installer boot menu, press TAB to edit the kernel command line and append:

 console=ttyS0 noshell