# Installation Manual

This installation manual guides you through setting up the builder and the runner in order to deploy, build and test CUBE.

The **runner** is responsible for running all the containers of the services, for both the dev and the release environment. The **builder** builds all incoming commits to the dev and the master branch of each repository and runs the tests associated with them. It also creates and pushes Docker images to docker.io, which can be pulled and deployed by the runner.

![](https://i.imgur.com/WaEPEex.jpg)

## Runner setup

First, a general setup of the server has to be executed. We start off by increasing the disk partition size to its maximum:

```
sudo bash -e -c '. <(wget -O - -q https://www.wall2.ilabt.iminds.be/expand-root-disk.sh)'
```

Next, update all packages using:

```
sudo apt update
sudo apt dist-upgrade
sudo apt autoremove -y
```

Make sure that sda1 is indeed the partition used by the server by running:

```
sudo fdisk -l
```

If this is in order, reinstall the GRUB boot loader for safety:

```
sudo grub-install --force /dev/sda1
```

To secure the server a bit more, enable the Ubuntu firewall:

```
sudo ufw enable
```

The runner uses docker-compose to run the containers of the dev and release environments. Because of this, we have to install Docker and docker-compose:

```
sudo apt install docker.io
sudo apt install docker-compose
```

To serve the services at pretty URLs, nginx is used to link the URLs to the endpoints of the containers. We have to install nginx:

```
sudo apt install nginx
```

Certbot is used to enable SSL when accessing the different services. We have to install snapd in order to install Certbot:

```
sudo apt install snapd
sudo snap install core; sudo snap refresh core
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
```

### Keycloak

#### Installation

Keycloak provides authentication for all of our services. The docker-compose file to set it up can be found [here](https://github.ugent.be/CUBE/project-documentation/blob/master/server/runner/keycloak/docker-compose.yaml). Certain environment variables (denoted by `${...}`), such as the Keycloak Postgres password, need to be given in an `.env` file in the same directory as the docker-compose file. This docker-compose also mounts the `themes` folder, which can be found [here](https://github.ugent.be/CUBE/project-documentation/tree/master/server/runner/keycloak/themes). This folder contains a custom theme for the CUBE login and register page.

For the docker-compose to work, an external volume with the name `postgres_data_keycloak` also needs to be created. Create the volume with the following command: `docker volume create postgres_data_keycloak`.

#### Setup

After Keycloak and nginx have been set up, navigate to `[BASE URL]/auth/admin/master/console/`. Log in with the credentials provided in the `.env` file. Afterwards, create a new realm named `CUBE` by clicking on `Select realm -> Add realm`. Enter the fields as follows:

![](https://i.imgur.com/RZSIwnA.png)

Next, click on import and import the following [realm file](https://github.ugent.be/CUBE/project-documentation/blob/master/server/runner/keycloak/realm-export.json). Finally, click on `Create` and the realm should be properly configured. If the services are hosted on a different root URL, change the Root URL in the respective clients for the services.
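The credentials used above are supplied through the `.env` file rather than being stored in the repository. As a minimal sketch, assuming the docker-compose file references variables for the Keycloak admin account and the Postgres password (the variable names below are hypothetical; use the `${...}` names actually referenced in the linked docker-compose file), the `.env` file looks like this:

```
# .env, placed next to docker-compose.yaml
# Variable names are illustrative; match them to the ${...} references
# in the linked docker-compose file.
KEYCLOAK_USER=admin
KEYCLOAK_PASSWORD=change-me
POSTGRES_PASSWORD=change-me
```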
### Grafana

Grafana is used to create and display dashboards for the metrics collected by the metrics-service. The docker-compose file to set it up can be found [here](https://github.ugent.be/CUBE/project-documentation/blob/master/server/runner/grafana/docker-compose.yaml). For the docker-compose to work, an external volume with the name `grafana_data_release` also needs to be created. Create the volume with the following command: `docker volume create grafana_data_release`. Similarly to the Keycloak setup, the username and password need to be given via the environment variables `${GRAFANA_USER}` and `${GRAFANA_PASSWORD}`. To run the docker-compose, simply run `docker-compose up`.

The running Grafana instance then needs to be configured with the InfluxDB data source. Log in to the Grafana instance at `[BASE_URL]/dashboard/datasources/`. Next, click on the cog in the menu to the left:

![](https://i.imgur.com/kS6QeVL.png)

and go to `Data sources`. Click on `Add data source` and then `InfluxDB`. Fill in the following configuration:

![](https://i.imgur.com/iGCJfAf.png)

In the token field, fill in the token configured for InfluxDB in the docker-compose. Save the configuration by clicking `Save & Test`. A success message should pop up saying `3 buckets found`.

The final step is to import the existing dashboard. In the menu, click on the `+`

![](https://i.imgur.com/evMh1Q0.png)

and click on `Import`. Under `Import via panel JSON`, paste the content of the file found [here](https://github.ugent.be/CUBE/project-documentation/blob/master/server/runner/grafana/config/dashboard.json). This should complete the Grafana configuration.

### Nginx

In order to link the URLs of the different services to their containers, nginx is used as a reverse proxy. It was installed previously, but the configuration still has to be provided. We also only have access to one IP address, which means that in order to reach builder services such as *Jenkins* and *SonarQube* via that IP address, we have to configure nginx to proxy these as well. A simple overview of how requests are forwarded is shown below.

![](https://i.imgur.com/OoQ1rE1.jpg)

We start by unlinking the default site of nginx:

```
sudo unlink /etc/nginx/sites-enabled/default
```

We create a new *site* called reverse-proxy. The contents of this file can be found [here](https://github.ugent.be/CUBE/project-documentation/blob/master/server/runner/nginx/reverse-proxy.conf).

```
cd /etc/nginx/sites-available/
sudo vi reverse-proxy.conf
```

We link the newly created available site, using a symbolic link, to the enabled sites in order to enable it:

```
sudo ln -s /etc/nginx/sites-available/reverse-proxy.conf /etc/nginx/sites-enabled/reverse-proxy.conf
```

We restart nginx to apply the new changes:

```
sudo service nginx restart
```
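To give an idea of what the linked `reverse-proxy.conf` does, the excerpt below sketches how a path such as `/auth/` (Keycloak on the runner) or `/jenkins/` (Jenkins on the builder) is forwarded. This is only an illustration: the host name is the one used later in this manual, and the upstream ports and the builder address are assumptions; the linked configuration file is authoritative.

```
# Illustrative excerpt only; the linked reverse-proxy.conf is authoritative.
server {
    listen 80;
    server_name dev.cube.designproject.idlab.ugent.be;

    # Keycloak container on the runner (port is an assumption)
    location /auth/ {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Jenkins running on the builder node (address and port are assumptions)
    location /jenkins/ {
        proxy_pass http://192.0.2.10:8080;
    }
}
```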
### Automatic docker image deploy

In order to redeploy images that were pushed to docker.io, we use a service that listens to docker.io webhooks. When a new image is pushed, a script that redeploys the corresponding docker-compose is run. These scripts for dev and release can be found [here](https://github.ugent.be/CUBE/project-documentation/blob/master/server/runner/docker/restart-dev.sh) and [here](https://github.ugent.be/CUBE/project-documentation/blob/master/server/runner/docker/restart-release.sh) respectively. We use an existing git repository to listen to the webhooks, but have to modify it slightly because both the dev and release images are pushed to the same Docker repository. After all, we don't want to restart the release environment when new images are pushed to dev.

We start by installing Python 3, cloning the repository and installing the required modules. We also update Flask to get rid of an *is_xhr* error.

```
sudo apt install python3-venv
git clone https://github.com/Praisebetoscience/dockerhub-webhook.git
cd dockerhub-webhook
pip3 install -r requirements.txt
pip3 install -U flask
```

In the *run.py* file, change `app.run(debug=True)` to `app.run(host='0.0.0.0', debug=True)`. Replace the contents of the file *handler.py* with [this](https://github.ugent.be/CUBE/project-documentation/blob/master/server/runner/docker/handler.py). Find the config.py file and make sure that the restart scripts are located in the directory above the one containing config.py. Add the following contents to the file:

```
HOOKS = {'release': '../restart-release.sh',
         'dev': '../restart-dev.sh',
         'none': '../none.sh'}
```

Note that `none.sh` is just an empty bash script.

The next step is to deploy the service on the server so that it is always up and running. First, install gunicorn using `sudo apt install gunicorn`. Next, create a new service file using `sudo vi /etc/systemd/system/imagelistener.service` with the following contents:

```
[Unit]
Description=Gunicorn daemon for imagelistener
After=network.target

[Service]
User=vmnaesse
Group=www-data
WorkingDirectory=/users/vmnaesse/docker/dockerhub-webhook
ExecStart=/users/vmnaesse/docker/dockerhub-webhook/venv/bin/gunicorn dockerhook:app -w 1 -b 0.0.0.0:5000 --timeout 120

[Install]
WantedBy=multi-user.target
```

Now the imagelistener service can be started, stopped and restarted using `sudo systemctl start|stop|restart imagelistener`. Finally, docker.io has to know which endpoint to call when a new image is pushed to Docker Hub. For this, simply add `<ip>:5000` to the webhooks section of the Docker Hub repository.

### Running the Services

In order to run the services, docker-compose is used. A docker-compose for dev and release is provided [here](https://github.ugent.be/CUBE/project-documentation/blob/master/server/runner/docker/docker-compose-dev.yaml) and [here](https://github.ugent.be/CUBE/project-documentation/blob/master/server/runner/docker/docker-compose-release.yaml) respectively. These containers can be started by running `docker-compose -f docker-compose-release.yaml -p CUBE-release up -d` or `docker-compose -f docker-compose-dev.yaml -p CUBE-dev up -d`, or more easily by running the scripts [here](https://github.ugent.be/CUBE/project-documentation/blob/master/server/runner/docker/restart-dev.sh) and [here](https://github.ugent.be/CUBE/project-documentation/blob/master/server/runner/docker/restart-release.sh) used by the earlier defined imagelistener (a sketch of such a script is shown below).
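As a minimal sketch of what such a restart script does, redeploying the release environment boils down to pulling the latest images and recreating the containers. The linked restart-release.sh is authoritative, and the directory containing the compose files is an assumption here:

```
#!/bin/bash
# Minimal sketch of a release redeploy script; the linked restart-release.sh
# is authoritative. The compose file location is an assumption.
cd /path/to/compose/files
docker-compose -f docker-compose-release.yaml -p CUBE-release pull
docker-compose -f docker-compose-release.yaml -p CUBE-release up -d
```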
#### Kafka

Some services require Kafka in order to run correctly. For this, a separate Kafka and ZooKeeper container connected to both the dev and release Docker networks should be created and started.

```
sudo docker create --network cubedev_default --name zookeeper -p 2181:2181 wurstmeister/zookeeper:latest
sudo docker network connect cuberelease_default zookeeper
sudo docker create --network cubedev_default --name kafka -p 9092:9092 --env KAFKA_ADVERTISED_HOST_NAME=kafka --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 --env KAFKA_AUTO_CREATE_TOPICS_ENABLE=true --env KAFKA_CREATE_TOPICS=model-dev:1:1,media-dev:1:1,model-rel:1:1,media-rel:1:1 --env KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 wurstmeister/kafka:2.13-2.7.0
sudo docker network connect cuberelease_default kafka
sudo docker start zookeeper
sudo docker start kafka
```

## Builder setup

First, a general setup of the server has to be executed. We start off by increasing the disk partition size to its maximum:

```
sudo bash -e -c '. <(wget -O - -q https://www.wall2.ilabt.iminds.be/expand-root-disk.sh)'
```

Next, update all packages using:

```
sudo apt update
sudo apt dist-upgrade
sudo apt autoremove -y
```

Make sure that sda1 is indeed the partition used by the server by running:

```
sudo fdisk -l
```

If this is in order, reinstall the GRUB boot loader for safety:

```
sudo grub-install --force /dev/sda1
```

To secure the server a bit more, enable the Ubuntu firewall:

```
sudo ufw enable
```

We also want this node to be publicly reachable over IPv4 so the runner can access this server:

```
wget -O - -nv https://www.wall2.ilabt.iminds.be/enable-nat.sh | sudo bash
```

Next, we install Java 11:

```
sudo apt install openjdk-11-jdk
```

For Jenkins to run containers successfully, we have to install Docker:

```
sudo apt install docker.io
```

### Jenkins

Jenkins is used to automatically build and test each repository, create Docker images and start code analysis with SonarQube. Jenkins listens to certain hooks of the GitHub repositories and reacts based on what type of GitHub operation was executed. A commit to the **dev** branch, for example, will result in a build, tests, a SonarQube analysis and a push of a new Docker dev image. Equally, a push to the **master** branch will do the same, but will create a release image instead of a dev image on Docker Hub.

We start off by installing Jenkins:

```
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins
```

And add Jenkins to the docker group using `sudo usermod -a -G docker jenkins`. Next, download the compressed tar found [here](https://drive.google.com/file/d/1kcjh0Eu135lZCWDyELaR6PPdy4Kp0PFI/view?usp=sharing). Copy the content from the `jenkins` directory in this tar to the clean install found at `/var/lib/jenkins` (a sketch of this step is shown below). Jenkins should now be correctly configured. You can log in to Jenkins using the following credentials:

```
vmnaesse
Er4wVu0k1MnbXOCc
```
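The restore step described above can be performed roughly as follows. This is a sketch: the archive name is an assumption, and Jenkins is stopped while copying so the configuration is picked up cleanly on restart.

```
# Assumed archive name; adjust to the actual downloaded file.
tar -xzf jenkins-config.tar.gz
sudo systemctl stop jenkins
sudo cp -r jenkins/. /var/lib/jenkins/
sudo chown -R jenkins:jenkins /var/lib/jenkins
sudo systemctl start jenkins
```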
### SonarQube

To install SonarQube, a few kernel parameters and user limits first need to be configured. Open the necessary file in the nano editor with the command `sudo nano /etc/sysctl.conf`, scroll to the bottom of the file and paste the following:

```
vm.max_map_count=262144
fs.file-max=65536
ulimit -n 65536
ulimit -u 4096
```

Next, open the limits.conf file with the command `sudo nano /etc/security/limits.conf`, scroll to the bottom of this file and paste the following:

```
sonarqube   -   nofile   65536
sonarqube   -   nproc    4096
```

In order for these changes to take effect, reboot the system with the command `sudo reboot`.

SonarQube also requires a database to work; we opted for a PostgreSQL server. Execute the commands below to install the PostgreSQL server and create a `sonarqube` user and database:

```
wget -q https://www.postgresql.org/media/keys/ACCC4CF8.asc -O - | sudo apt-key add -
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
sudo apt-get update
sudo apt install postgresql postgresql-contrib -y
sudo systemctl start postgresql
sudo systemctl enable postgresql
sudo passwd postgres
su - postgres
createuser sonarqube
psql
ALTER USER sonarqube WITH ENCRYPTED password 'sonar';
CREATE DATABASE sonarqube WITH ENCODING 'UTF8' OWNER sonarqube TEMPLATE=template0;
GRANT ALL PRIVILEGES ON DATABASE sonarqube to sonarqube;
\q
```

Next, download SonarQube [here](https://www.sonarqube.org/downloads/) and unzip the content to `/opt/sonarqube/sonarqube-${VERSION}/`. Copy the properties file found [here](https://github.ugent.be/CUBE/project-documentation/blob/master/server/builder/sonar/sonar.properties) into `/opt/sonarqube/sonarqube-${VERSION}/conf`.

Next, we should create a user and group for Sonar. First, create the group with the command `sudo groupadd sonar`. Now we can create the user, set the user's home directory to `/opt/sonarqube/sonarqube-${VERSION}/` and add it to the new group with the command `sudo useradd -c "SonarQube - User" -d /opt/sonarqube/sonarqube-${VERSION}/ -g sonar sonar`. Change the ownership of the sonarqube directory with the command `sudo chown -R sonar:sonar /opt/sonarqube/sonarqube-${VERSION}/`.

Next, we need to change the user that will run the SonarQube server. Issue the command `sudo nano /opt/sonarqube/sonarqube-${VERSION}/bin/linux-x86-64/sonar.sh`. At the bottom of that file, make sure the RUN_AS_USER line looks like: `RUN_AS_USER=sonar`.

Next, we need to create a startup file to start SonarQube. Do that with the command `sudo nano /etc/systemd/system/sonarqube.service` and paste the following contents into this new file:

```
[Unit]
Description=SonarQube service
After=syslog.target network.target

[Service]
Type=forking
ExecStart=/opt/sonarqube/sonarqube-${VERSION}/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonarqube/sonarqube-${VERSION}/bin/linux-x86-64/sonar.sh stop
User=sonar
Group=sonar
Restart=always
LimitNOFILE=65536
LimitNPROC=4096

[Install]
WantedBy=multi-user.target
```

Next, download the compressed tar containing our SonarQube plugins [here](https://drive.google.com/file/d/1H1C6YQSA81e0PeFGFJL5rdesU2g4l0RO/view?usp=sharing) and extract the plugins in this tar to `/opt/sonarqube/sonarqube-${VERSION}/extensions/plugins`.
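For reference, the properties file copied into the `conf` directory earlier ties SonarQube to the PostgreSQL database created above and serves the UI under the `/sonar` context used by the nginx reverse proxy. The linked sonar.properties is authoritative; a minimal sketch of the relevant entries, assuming the local database and credentials configured above, looks like this:

```
# Sketch of the relevant entries; the linked sonar.properties is authoritative.
sonar.jdbc.username=sonarqube
sonar.jdbc.password=sonar
sonar.jdbc.url=jdbc:postgresql://localhost/sonarqube
sonar.web.context=/sonar
```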
Finally, we need to migrate the data from our existing PostgreSQL instance to the new one. For this we have provided a dump file, which can be found [here](https://drive.google.com/file/d/1l3eyarSthOadidMlelL4gxxJL6qgjqWg/view?usp=sharing). Import this dump into the instance by executing the following command: `psql sonarqube < sonar.dump`.

Now we can finally start and enable SonarQube with `sudo systemctl start sonarqube` and `sudo systemctl enable sonarqube`.

Access SonarQube at `https://dev.cube.designproject.idlab.ugent.be/sonar/projects`.

## GitHub configuration

To get notifications in GitHub about the status of the Jenkins build pipelines, we need to set up webhooks in each repository. The configuration of these hooks looks as follows:

![](https://i.imgur.com/2cuvgPJ.png)

The payload URL field should contain the webhook endpoint under the Jenkins base URL (e.g. `https://dev.cube.designproject.idlab.ugent.be/jenkins/github-webhook/`).
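As a quick sanity check before saving the hook in GitHub, you can verify that the webhook endpoint is reachable through the reverse proxy (the URL below is the example used above); a response other than a 404 indicates that requests are being forwarded to Jenkins:

```
# Reachability check for the Jenkins webhook endpoint (example URL).
curl -i -X POST https://dev.cube.designproject.idlab.ugent.be/jenkins/github-webhook/
```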