# Run LibreMesh in a virtual machine

This repository contains scripts that let you bring up LibreMesh instances in virtual machines using QEMU. This way you can test LibreMesh without needing physical routers or flashing the LibreMesh firmware onto them, which greatly speeds up the LibreMesh development process.

## What are the limitations of virtualization?

This image is not a perfect firmware image: it has no WiFi, for example, although Ethernet LAN and WAN networking is supported. All the files inside the packages' `files/` directories are copied into the rootfs, overlaying a precooked image that is a full LibreMesh x86_64 image. So don't expect everything to run exactly as on a wireless router, but most things will perform as expected:

* initialization scripts: uci-defaults, init.d, etc.
* lime-config
* ubus / rpcd
* lime-app

ICMPv4 does NOT work between QEMU nodes, so ping (v4) will not work as expected. Everything else (including ICMPv6, i.e. ping6) does work as expected, however.

## What do you need to run LibreMesh in a virtual machine?

In order to run LibreMesh in QEMU you need two files:

- generic-rootfs.tar.gz
- ramfs.bzImage

These can be generated by compiling LibreMesh from the `buildroot`, selecting x86 as the target and enabling the option to generate ramfs.bzImage. LibreRouterOS, a LibreMesh flavor maintained by the LibreRouter team, distributes these images with each of its releases. You can find them at:

https://gitlab.com/librerouter/librerouteros/-/releases

Example: https://repo.librerouter.org/lros/releases/1.5/targets/x86/64/

## How to virtualize a LibreMesh node?

1. Clone the lime-packages repository:

```
git clone git@github.com:libremesh/lime-packages.git
```

2. Install the following packages:

```
sudo apt install dnsmasq iptables qemu-system-x86
```

3. Run the script ./tools/qemu_dev_start:

```
cd lime-packages
sudo ./tools/qemu_dev_start /path/to/rootfs.tar.gz /path/to/ramfs.bzImage
```

4.
To stop the virtual machine and clean up the environment created on the host for its operation:

```
sudo ./tools/qemu_dev_stop
```

In addition, up to 100 QEMU nodes can be configured. This is done with the `--node-id N` parameter. In this example the node id is 1 and the host network interface is wlo1:

```
sudo ./tools/qemu_dev_start --node-id 1 --enable-wan wlo1 /path/to/rootfs.tar.gz /path/to/ramfs.bzImage
```

The LAN interfaces of all the nodes are tied together. You can use `--enable-wan` on just one of the nodes to share its internet connection with the network.

## How do I give it access to the internet?

If you want to give the node access to the internet, qemu_dev_start accepts the `--enable-wan` argument, which takes as its parameter the name of the host network interface through which you have an internet connection. In this example the WiFi interface name is wlo1. The command is used as follows:

```
sudo ./tools/qemu_dev_start --enable-wan wlo1 /path/to/rootfs.tar.gz /path/to/ramfs.bzImage
```

## How do I overlay the image with my LibreMesh working directory?

If you have been working on LibreMesh modifications and want to test them on a virtual node, you can use the `--libremesh-workdir` argument to overlay your working directory onto the root file system without needing to rebuild the images. The command is used as shown next:

```
sudo ./tools/qemu_dev_start --libremesh-workdir . /path/to/rootfs.tar.gz /path/to/ramfs.bzImage
```

## How do I virtualize a multi-node LibreMesh network?

1. Install ansible:

```
sudo apt install ansible
```

or follow the official Ansible [installation guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html).

2. Copy the `rootfs.tar.gz` and `ramfs.bzImage` images to ./tools/ansible/files:

```
cd lime-packages
cp /path/to/rootfs.tar.gz ./tools/ansible/files/generic-rootfs.tar.gz
cp /path/to/ramfs.bzImage ./tools/ansible/files/ramfs.bzImage
```

3.
Run the playbook qemu_cloud_start.yml:

```
cd lime-packages/tools/ansible
sudo ansible-playbook qemu_cloud_start.yml
```

4. To stop the virtual machines and clean up the environment created on the host for their operation:

```
sudo ansible-playbook qemu_cloud_stop.yml
```

By default, the network topology is made up of 12 nodes distributed across four batman/L2 clouds (A, B, C, D) that are interconnected. To modify this topology, you can edit the `./tools/ansible/hosts.yml` file. You will then be able to access each of the cloud nodes via clusterssh, as shown in the playbook output.

## How do I specify which node of the mesh has the internet connection?

Edit the `hosts.yml` file and add the `enable_wan` variable with the name of the host network interface through which you have an internet connection. Example:

```
all:
  children:
    cloudA:
      vars:
        eth0: lm_cloudA
        cloud: A
      hosts:
        hostA.cloudA.test:
          enable_wan: wlo1
(...)
```

## How do I specify which node of the mesh my host is connected to?

The goal is to simulate that one of the nodes brought up in the cloud is connected to our host by an Ethernet cable. To do so:

1. Create a bridge on the host; in this case we call it "bridge_lan": `sudo ip link add name bridge_lan type bridge`
2. Bring the bridge up: `sudo ip link set bridge_lan up`
3. Assign the bridge an IP in the same range as the cloud nodes. Keep in mind that the nodes are in the 10.235.0.0/16 range: `sudo ip addr add 10.235.192.2/16 dev bridge_lan`
4. Add any of the LAN interfaces of one of the cloud nodes to the bridge; in this case the interface added is lm_A_hostA_0: `sudo ip link set lm_A_hostA_0 master bridge_lan`
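
Before launching a node with qemu_dev_start, it can help to verify that the required commands and image files are present. The following is a small pre-flight sketch, not part of the repository's tooling; the default file names and the `qemu-system-x86_64` binary name (installed by the `qemu-system-x86` package on Debian/Ubuntu) are assumptions about a typical setup:

```shell
#!/bin/sh
# Pre-flight check before launching a node with qemu_dev_start.
# Pass your own image paths as arguments; the defaults are illustrative.

ROOTFS="${1:-generic-rootfs.tar.gz}"
RAMFS="${2:-ramfs.bzImage}"

# Succeeds if the given command is available on PATH.
check_cmd() {
    command -v "$1" >/dev/null 2>&1
}

# Succeeds if the given file exists and is readable.
check_file() {
    [ -r "$1" ]
}

status=0
for cmd in dnsmasq iptables qemu-system-x86_64; do
    check_cmd "$cmd" || { echo "missing command: $cmd" >&2; status=1; }
done
for f in "$ROOTFS" "$RAMFS"; do
    check_file "$f" || { echo "missing image: $f" >&2; status=1; }
done

if [ "$status" -eq 0 ]; then
    echo "ready: sudo ./tools/qemu_dev_start $ROOTFS $RAMFS"
else
    echo "some prerequisites are missing" >&2
fi
```

Run it from the lime-packages checkout, for example `sh preflight.sh /path/to/rootfs.tar.gz /path/to/ramfs.bzImage` (the script name is hypothetical).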
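
If you prefer to apply the four bridge steps above in one go, they can be collected into a small script. This is a sketch: `bridge_lan`, the host IP, and the tap interface name `lm_A_hostA_0` come from the example above and must match your actual cloud, and the `run` helper is just a convenience that echoes each command before executing it and reports failures without aborting:

```shell
#!/bin/sh
# Sketch: attach the host to the LAN of one cloud node (see steps above).
# These values are taken from the example; adjust NODE_IF to the LAN tap
# interface of the node you want to attach to.
BRIDGE=bridge_lan
HOST_IP=10.235.192.2/16
NODE_IF=lm_A_hostA_0

# Print each command before executing it; report failures but keep going.
run() {
    echo "+ $*"
    "$@" || echo "  failed: $*" >&2
}

run sudo ip link add name "$BRIDGE" type bridge   # 1. create the bridge
run sudo ip link set "$BRIDGE" up                 # 2. bring it up
run sudo ip addr add "$HOST_IP" dev "$BRIDGE"     # 3. IP in the 10.235.0.0/16 range
run sudo ip link set "$NODE_IF" master "$BRIDGE"  # 4. enslave the node's LAN tap
```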