# Final Report: Try LibreMesh without having a router
The first blog post mentioned some problems that users were running into when trying LibreMesh. In short, they were:
1. Shutting down a previously started node left an interface up on the host.
2. Starting a node with internet access was not possible, since the port required by the qemu_dev_start script was always occupied by the systemd-resolved process.
3. Bringing up the node cloud failed because the scripts called ifconfig, which is no longer installed by default on Ubuntu releases after 18.04, so the user could not run the node cloud.
To solve these problems, the chosen strategy was to modify the existing files, while also writing detailed documentation and adding new functionality, to make virtualization easier for anyone who wants to try LibreMesh.
## Milestones achieved
1. **Improvement of the environment**: all the problems mentioned above were obstacles when testing LibreMesh, discouraging users from learning the tools that LibreMesh offers. Identifying and correcting them gives any user a cleaner testing environment, free of obstacles caused by the LibreMesh code itself.
**1.1 Close interface on host**
The solution was to modify the qemu_dev_stop script. The interface left up is called lime_br0, so the following line was added to that file:
```sh
ip link del lime_br0
```
This way, the interface on the host is removed when the node stops running.
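A minimal sketch of such a cleanup step, assuming the leftover bridge is called lime_br0 (the guard and the function name are illustrative, not the literal content of qemu_dev_stop):

```sh
#!/bin/sh
# Illustrative cleanup step: delete the bridge the node scripts create,
# but only if it is actually present on the host.
cleanup_bridge() {
    bridge_name="$1"
    if ip link show "$bridge_name" >/dev/null 2>&1; then
        ip link del "$bridge_name"      # needs root
        echo "removed $bridge_name"
    else
        echo "$bridge_name not present, nothing to do"
    fi
}

cleanup_bridge lime_br0
```

Guarding the deletion makes the stop script safe to run twice: the second run simply reports that there is nothing to do.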
**1.2. Port collision**
In this case, the solution was to change the port used by the DHCP service for the wan interface:
```sh
dnsmasq -F 172.99.0.100,172.99.0.100 --dhcp-option=3,172.99.0.1 -i "$WAN_IFC" --dhcp-authoritative --log-dhcp --port=5353 --bind-dynamic
```
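The collision happens because systemd-resolved already listens on the standard DNS port 53, which is why the script moves dnsmasq to 5353. A small sketch of that fallback logic (the helper name is mine, not part of the scripts):

```sh
# choose_dns_port is an illustrative helper, not part of lime-packages:
# it falls back to 5353 when something (e.g. systemd-resolved) already
# listens on the standard DNS port 53.
choose_dns_port() {
    if ss -ltun 2>/dev/null | grep -q ':53 '; then
        echo 5353   # port 53 is busy: use the fallback port
    else
        echo 53
    fi
}

choose_dns_port
```

On a stock Ubuntu desktop this prints 5353, because systemd-resolved holds 127.0.0.53:53.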
**1.3. Calls to ifconfig**
Two files were modified in this case:
- linux_bridge.py
- linux_bridge_port.py
Both are located in the lime-packages/tools/ansible/modules directory. Wherever brctl or ifconfig was called, ip is now used instead.
In linux_bridge.py the changes were the following:
```diff
-    def brctl(self, cmd):
-        return self.module.run_command(['brctl'] + cmd)
-    def ifconfig(self, cmd):
-        return self.module.run_command(['ifconfig'] + cmd)
+    def ip(self, cmd):
+        return self.module.run_command(['ip'] + cmd)

-        (rc, out, err) = self.brctl(['addbr', self.bridge])
+        (rc, out, err) = self.ip(['link', 'add', 'name', self.bridge, 'type', 'bridge'])

-        self.ifconfig([self.bridge, 'up'])
+        self.ip(['link', 'set', 'up', self.bridge])

-        self.ifconfig([self.bridge, 'down'])
-        (rc, out, err) = self.brctl(['delbr', self.bridge])
+        self.ip(['link', 'set', 'down', self.bridge])
+        (rc, out, err) = self.ip(['link', 'del', self.bridge])
```
In linux_bridge_port.py the changes were:
```diff
-    def brctl(self, cmd):
-        return self.module.run_command(['brctl'] + cmd)
+    def ip(self, cmd):
+        return self.module.run_command(['ip'] + cmd)

-        (rc, out, err) = self.brctl(['addif', self.bridge, self.port])
+        (rc, out, err) = self.ip(['link', 'set', self.port, 'master', self.bridge])

-        (rc, out, err) = self.brctl(['delif', self.bridge, self.port])
+        (rc, out, err) = self.ip(['link', 'set', self.port, 'nomaster'])
```
(The iproute2 equivalent of brctl delif is `ip link set <port> nomaster`; `ip link del` would delete the port interface itself.)
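Taken together, the brctl/ifconfig-to-iproute2 mapping used across the two modules can be summarized as follows. Here tap0 stands in for an arbitrary bridge port, and run_cmd only prints each command, so the list is safe to run without root (the modules execute the real commands through module.run_command):

```sh
# Print each command instead of executing it: the mapping itself is the
# point, and the real ip invocations would require root privileges.
run_cmd() { echo "$*"; }

run_cmd ip link add name lime_br0 type bridge   # was: brctl addbr lime_br0
run_cmd ip link set up lime_br0                 # was: ifconfig lime_br0 up
run_cmd ip link set tap0 master lime_br0        # was: brctl addif lime_br0 tap0
run_cmd ip link set tap0 nomaster               # was: brctl delif lime_br0 tap0
run_cmd ip link set down lime_br0               # was: ifconfig lime_br0 down
run_cmd ip link del lime_br0                    # was: brctl delbr lime_br0
```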
Also, in the qemu_cloud_start.yml file, the message that tells the user how to reach the nodes of the cloud with clusterssh was modified:
```diff
- clusterssh {{ linklocals | join(' ') }}
+ clusterssh -o "-o StrictHostKeyChecking=no -o HostKeyAlgorithms=+ssh-rsa" root@{{ linklocals | join(' ') }}
```
2. **New functionality**
LibreMesh already offered a way to bring up a single node. This was done with the qemu_dev_start script, running the following from the root of the repository:
```sh
sudo ./tools/qemu_dev_start ~/path/to/rootfs.tar.gz ~/librerouteros/path/to/ramfs.bzImage
```
However, this required the user to have installed the necessary packages beforehand.
Taking this into account, and in order to relieve the user of the responsibility of downloading the dependencies, the only_one_node.yml playbook was developed. It fulfills the same function as qemu_dev_start (in fact, it calls that script), but the user no longer has to install the dependencies by hand: they are installed by another playbook, cloude_and_node_packages.yml, which both only_one_node.yml and qemu_cloud_start.yml import.
The only_one_node.yml playbook accepts the same parameters as qemu_dev_start, passed through the --extra-vars option of ansible-playbook.
For example, to start a node with internet access (assuming the host's network interface is called wlo1):
```sh
sudo ansible-playbook only_one_node.yml --extra-vars "param='--enable-wan wlo1'"
```
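The quoting here is the tricky part: the whole qemu_dev_start argument string must survive as a single `param` value. A hypothetical helper (not part of lime-packages) that builds the invocation shows the nesting:

```sh
# start_node_cmd is an illustrative helper: it builds the
# ansible-playbook command line, forwarding qemu_dev_start options
# through --extra-vars when a WAN interface is given.
start_node_cmd() {
    wan_ifc="$1"
    if [ -n "$wan_ifc" ]; then
        echo "ansible-playbook only_one_node.yml --extra-vars \"param='--enable-wan $wan_ifc'\""
    else
        echo "ansible-playbook only_one_node.yml"
    fi
}

start_node_cmd wlo1
```

The outer double quotes group the --extra-vars value for the shell, while the inner single quotes keep `--enable-wan wlo1` together as one Ansible variable.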
As mentioned at the beginning, all the steps needed to virtualize LibreMesh and to better understand its functionality have been documented, so that any user can have a more complete and guided experience: https://hackmd.io/Ga3Uq-73SQmITqq1uKTquA?both
## Conclusion
The solutions given to these problems, together with everything documented about using LibreMesh, are a great help to anyone who wants to try it for the first time, even without knowing what LibreMesh is.
When people are not knowledgeable about something, run into problem after problem, and find no help to clarify the picture, they are likely to stop spending effort on learning it and to look for similar options that they can understand and access more easily.
That is why removing the problems described above and providing documentation that guides the testing process means that users do not give up so easily and keep wanting to use mesh networks, which were the objectives pursued from the beginning.
Developing this whole project was very challenging for me. From using tools I did not know, such as Ansible playbooks, to network management I was not very familiar with, virtual machines, getting to know the LibreMesh community, and managing a project itself, there were many things I learned, and that is what I take away from this project.