# HomeLAB Redux
HomeLAB Installation notes with [ProxMox Port](https://github.com/jiangcuo/Proxmox-Port/) pve8
Sources list:
- https://github.com/jiangcuo/Proxmox-Port/blob/main/help/repo.md
- https://forum.proxmox.com/threads/how-to-run-pve-7-on-a-raspberry-pi.95658/
- https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
# Storage
## RPI Setup
What you need:
- Raspberry Pi 3/4 (4B recommended)
- SD Card (USB-SSD recommended)
- Ethernet connection
- Dietpi OS 64bit (https://dietpi.com/)
- Know-how to flash the image (Win: Raspberry Pi Imager, balenaEtcher, Win32DiskImager; Linux: `dd`)
- Knowledge of how to connect to your RPi via SSH
- Around 30min - 2h (depending on the RPi type, clock, disk and network speed)
### RPI5 configs
- Install Raspberry Pi OS Lite (64-bit)
- Install packages
```
sudo apt install -y ntp
```
### Headless dietpi set up
Flash the `dietpi` image to the SD card. Once done, open `dietpi.txt` on the `boot` partition and apply your changes, in my case for `pmn01` in Italy:
- `AUTO_SETUP_KEYBOARD_LAYOUT=us` # Change from gb to us keyboard layout
- `AUTO_SETUP_TIMEZONE=Europe/Rome` # Rome Timezone
- `AUTO_SETUP_NET_WIFI_COUNTRY_CODE=IT` # Even if WIFI is not used, I set it to Italy
- `AUTO_SETUP_NET_HOSTNAME=pmn01` # Hostname
- `AUTO_SETUP_HEADLESS=1` # Headless
- `AUTO_SETUP_SSH_SERVER_INDEX=-2` # Openssh
- `AUTO_SETUP_SSH_PUBKEY=ssh-ed25519 AAAAAAAA111111111111BBBBBBBBBBBB222222222222cccccccccccc333333333333 mySSHkey` # Uncomment and add your ssh [ed25519 public key](https://blog.stribik.technology/2015/01/04/secure-secure-shell.html)
- `AUTO_SETUP_WEB_SERVER_INDEX=-2` # Lighttpd web server
- `AUTO_SETUP_AUTOMATED=1`
- `AUTO_SETUP_GLOBAL_PASSWORD=<REDACTED>` # Add a secure password here
- `AUTO_SETUP_INSTALL_SOFTWARE_ID=130` # Install Python, required for ansible (optional)
- `CONFIG_CHECK_APT_UPDATES=2` # Check and install APT updates
- `CONFIG_SERIAL_CONSOLE_ENABLE=0` # Disable serial console
- `SOFTWARE_DISABLE_SSH_PASSWORD_LOGINS=1` # Disable password logins for SSH
Full config for my DietPi_RPi-ARMv8-Bookworm [here](https://gitlab.com/antonionardella/homelab/-/blob/main/dietpi/proxmox_node/dietpi.txt)
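Since these are plain `KEY=value` lines, the edits can also be scripted against a freshly flashed card. A minimal sketch, assuming the boot partition is mounted at `/media/$USER/bootfs` (adjust to your system):
```bash
# Hypothetical helper: uncomment and overwrite a dietpi.txt setting in place
BOOT=/media/$USER/bootfs
set_dietpi() { sudo sed -i "s|^[#[:space:]]*$1=.*|$1=$2|" "$BOOT/dietpi.txt"; }

set_dietpi AUTO_SETUP_KEYBOARD_LAYOUT us
set_dietpi AUTO_SETUP_TIMEZONE Europe/Rome
set_dietpi AUTO_SETUP_NET_HOSTNAME pmn01
set_dietpi AUTO_SETUP_HEADLESS 1
set_dietpi AUTO_SETUP_AUTOMATED 1
```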
### Set up the host
I set up the host's static IP address at router level.
Set the fixed IP address in the `hosts` file
```
sudo nano /etc/hosts
```
Delete all IPv6-related lines; this is what should be left, in my case for the `10.0.0.61` address
```
127.0.0.1 localhost
10.0.0.61 pmn01
```
### Install Proxmox-Port to arm
The first step after booting up the Raspberry Pi is to bring it up to date and install the needed dependencies and some tools.
Add the repo
```bash!
sudo sh -c 'echo "deb https://global.mirrors.apqa.cn/proxmox/debian/pve bookworm port" > /etc/apt/sources.list.d/pve-install-repo.list'
```
Add the key
```bash!
sudo curl -L https://mirrors.apqa.cn/proxmox/debian/pveport.gpg -o /etc/apt/trusted.gpg.d/pveport.gpg
```
Update, install and reboot
```bash!
sudo dietpi-update && sudo apt update && sudo apt upgrade -y && sudo apt install -y gnupg curl tmux ifupdown2 raspberrypi-kernel-headers proxmox-ve && sudo apt remove os-prober && sudo reboot
```
Proxmox will be available at `https://IPADDRESS:8006`
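After the reboot, a quick sanity check from the shell:
```bash
pveversion                 # prints the installed pve-manager version
systemctl status pveproxy  # the web UI on port 8006 is served by this service
```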
### Timesync
- `sudo dietpi-config`
- `Advanced Options`
- `Time sync mode`: `Boot + Hourly`
### (Optional) Monitoring with CheckMK on ARM
Sources:
- https://github.com/chrisss404/check-mk-arm#install-checkmk-to-your-device
# Hardware
## HPE ProLiant Microserver Gen 10 Plus
### Hardware
### FileSystem
| Disk | Quantity | Filesystem | Storage | Share | Usage |
| ---- | -------- | ---------- | ------- | ----- | ----- |
| 256GB MicroSD | 1 | XFS | 256GB | N/A | Proxmox |
| 500GB SSD | 1 | ZFS (/tank) | 500GB | N/A | CT/VM drives |
| 1TB SSD | 1 | ZFS Disk (/cannon) | 1TB | NFSv4.2 | CT/VM drives, data volumes, ISO, templates, K3S volume data |
| 1TB SSD | 2 | ZFS Mirror | 1TB | NFSv4.2 | Data, Photos, Personal backups |
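For reference, pools matching this layout could be created along the following lines; the device IDs and the mirror pool's name are placeholders, not the values actually used here:
```bash
# Hypothetical zpool creation matching the table above (adjust device IDs)
zpool create tank /dev/disk/by-id/<500gb-ssd>
zpool create cannon /dev/disk/by-id/<1tb-ssd-1>
zpool create <mirror-pool> mirror /dev/disk/by-id/<1tb-ssd-2> /dev/disk/by-id/<1tb-ssd-3>
```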
### Management GUI (CockPit)
- Install Proxmox Debian 12 (bookworm) container
- Bind mount `/cannon/{vmctbackup, vmctdata, vmctstore}` with LXC container
- Some prerequisites `apt install -y tuned wget curl lm-sensors xz-utils`
- Install [Cockpit](https://cockpit-project.org)
```bash
. /etc/os-release
echo "deb http://deb.debian.org/debian ${VERSION_CODENAME}-backports main" > \
/etc/apt/sources.list.d/backports.list
apt update
apt install -t ${VERSION_CODENAME}-backports cockpit
```
- Get the basic add-ons in one go (or one-by-one below)
```bash!
sudo apt install -y cockpit-storaged cockpit-networkmanager cockpit-packagekit cockpit-sosreport
```
- Install Storage management
`sudo apt install -y cockpit-storaged`
- Install Network/Firewall management
`sudo apt install -y cockpit-networkmanager`
- Install SoftwareUpdates check
`sudo apt install -y cockpit-packagekit`
- Install Diagnostic report
`sudo apt install -y cockpit-sosreport`
- Install [45Drives file sharing](https://github.com/45Drives/cockpit-file-sharing) to set up NFS and Samba shares
```bash
curl -LO https://github.com/45Drives/cockpit-file-sharing/releases/download/v3.3.4/cockpit-file-sharing_3.3.4-1focal_all.deb
sudo apt install -y ./cockpit-file-sharing_3.3.4-1focal_all.deb
```
- (Optional) Install [45Drives Navigator](https://github.com/45Drives/cockpit-navigator) file browser
```bash
wget https://github.com/45Drives/cockpit-navigator/releases/download/v0.5.10/cockpit-navigator_0.5.10-1focal_all.deb
sudo apt install -y ./cockpit-navigator_0.5.10-1focal_all.deb
```
- (Optional) Install sensors
```bash
wget https://github.com/ocristopfer/cockpit-sensors/releases/latest/download/cockpit-sensors.tar.xz && \
tar -xf cockpit-sensors.tar.xz cockpit-sensors/dist && \
sudo mv cockpit-sensors/dist /usr/share/cockpit/sensors && \
rm -r cockpit-sensors && \
rm cockpit-sensors.tar.xz
```
- (Optional) Install Storage benchmarks
```bash
wget https://github.com/45Drives/cockpit-benchmark/releases/download/v2.1.0/cockpit-benchmark_2.1.0-2focal_all.deb
sudo apt install -y ./cockpit-benchmark_2.1.0-2focal_all.deb
```
### Install ZFS
Sources:
- https://www.cyberciti.biz/faq/installing-zfs-on-debian-12-bookworm-linux-apt-get/
#### ZFS Performance
`sudo vim /etc/modprobe.d/zfs.conf`
Size the ARC based on the available RAM. For this system (amd64, 8GB RAM) I pin both `zfs_arc_min` and `zfs_arc_max` to 5120M, which in bytes is:
```bash
options zfs zfs_arc_min="5368709120"
options zfs zfs_arc_max="5368709120"
```
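A quick shell check of the MiB-to-bytes conversion:
```bash
echo $((5120 * 1024 * 1024))  # 5368709120
```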
#### Network Card unmanaged
Source:
- https://www.networkshinobi.com/cockpit-unmanage-interfaces/
Fix:
If you installed Cockpit on a Debian system and some of your interfaces show up under the Unmanaged interfaces section, the solution is to comment out those interfaces in `/etc/network/interfaces`. NetworkManager will then manage the interfaces, and therefore Cockpit will be able to manage them too.
```
root@debian:~# nmcli device status
DEVICE TYPE STATE CONNECTION
ens18 ethernet unmanaged --
lo loopback unmanaged --
```
Comment out the physical interfaces in /etc/network/interfaces.
```
sed -i 's/allow/#allow/g' /etc/network/interfaces
sed -i 's/iface ens18/#iface ens18/g' /etc/network/interfaces
```
Restart the network-manager
`systemctl restart NetworkManager`
## Geekworm RPI4 NAS
- memorypalace - Backup
- cybersynapse - Media
Dietpi 8.4
### Geekworm scripts
Source:
- https://wiki.geekworm.com/XScript
Install git `sudo apt install -y git`
`sudo nano /boot/config.txt`
add `dtoverlay=pwm-2chan` at the end, in my case
```
dtoverlay=disable-wifi
dtoverlay=pwm-2chan
```
```bash
git clone https://codeberg.org/antonionardella/xscript
cd xscript
chmod +x *.sh
```
#### Create service for the PWM fan
```bash
sudo cp -f ./x-c1-fan.sh /usr/local/bin/
sudo cp -f ./x-c1-fan.service /lib/systemd/system
sudo systemctl daemon-reload
sudo systemctl enable x-c1-fan
sudo systemctl start x-c1-fan
```
#### Create service for power
```bash
sudo cp -f ./x-c1-pwr.sh /usr/local/bin/
sudo cp -f x-c1-pwr.service /lib/systemd/system
sudo systemctl daemon-reload
sudo systemctl enable x-c1-pwr
sudo systemctl start x-c1-pwr
```
#### Prepare software shutdown script
`sudo cp -f ./x-c1-softsd.sh /usr/local/bin/`
Create an `xoff` alias to execute the software shutdown
```bash
echo "alias xoff='sudo /usr/local/bin/x-c1-softsd.sh'" >> ~/.bashrc
source ~/.bashrc
```
Then you can run `xoff` to execute software shutdown.
#### Test safe shutdown
Software safe shutdown command:
`xoff`
:warning: DON'T run the regular Linux `shutdown` command to shut down, otherwise the shield's power will not be cut.
#### Notes
1. The fan speed control code now lives in the `x-c1-fan.sh` file.
2. `fan-rpi.py` and `fan-pigpio.py` are no longer used; they are kept here for research and for Python lovers only.
#### Uninstall
Uninstall x-c1-fan.service:
```bash
sudo systemctl stop x-c1-fan
sudo systemctl disable x-c1-fan
```
Uninstall x-c1-pwr.service:
```bash
sudo systemctl stop x-c1-pwr
sudo systemctl disable x-c1-pwr
```
Remove the `xoff` alias from the `.bashrc` file
```bash
sudo sed -i '/xoff/d' ~/.bashrc
source ~/.bashrc
```
### (OPTIONAL) OLED Display
#### Enable I2C
Run `dietpi-config`, go to `Advanced Options` → `I2C state` and enable it.
Reboot
#### Update and install dependencies
```bash
sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get -y install i2c-tools
sudo apt-get -y install python3-pip python3-pil build-essential
sudo pip3 install --upgrade setuptools
sudo pip3 install --upgrade adafruit-python-shell
sudo pip3 install adafruit-circuitpython-ssd1306
sudo pip3 install pi-ina219
sudo pip3 install packaging
sudo pip3 show pi-ina219
```
#### Run the following command to check the oled i2c port
`sudo i2cdetect -y 1`
Possible result:
```bash
pi@raspberrypi:~ $ sudo i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00: -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- 3c -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --
```
`3c` is the OLED display
#### Installation
Download the required scripts
```
cd ~
git clone https://codeberg.org/antonionardella/oled
cd oled
sudo python3 raspi-blinka.py
```
This process may take tens of minutes, please be patient.
Answer `Y` and hit Enter when prompted to reboot.
After the reboot, navigate back to the `oled` directory:
`cd oled`
Run the script to test the display
`sudo python3 stats.py`
Run the script at Raspberry Pi boot
`sudo crontab -e`
Add a line at the end of the file that reads like this:
`@reboot cd /home/dietpi/oled && python3 /home/dietpi/oled/stats.py &`
PS: we must change into the `/home/dietpi/oled` directory because the `.ttf` font files are looked up in the current directory (see the script source); alternatively, you can drop the `cd ... &&` part if you use the absolute path of the ttf file in the source code.
Save and exit. In nano editor, you do that by hitting CTRL + X, answering Y and hitting Enter when prompted.
#### Cleanup
`sudo apt purge build-essential`
### ZFS
`sudo apt install -y "linux-headers-$(uname -r)" zfsutils-linux zfs-dkms zfs-zed autoconf carton build-essential checkinstall`
#### Install cockpit tools
```bash
wget https://github.com/45Drives/cockpit-zfs-manager/releases/download/v1.3.1/cockpit-zfs-manager_1.3.1-1focal_all.deb
sudo dpkg -i ./cockpit-zfs-manager_1.3.1-1focal_all.deb
```
#### ZFS Performance
`sudo nano /etc/modprobe.d/zfs.conf`
In my case for 4GB RAM
```bash
options zfs zfs_arc_min="1610612736" # 1536M in bytes
options zfs zfs_arc_max="1610612736" # 1536M in bytes
```
```
sudo reboot
## OR ##
sudo systemctl reboot
```
Check if configuration has been set
`cat /sys/module/zfs/parameters/zfs_arc_min`
and
`cat /sys/module/zfs/parameters/zfs_arc_max`
### ZFS Snapshot backups (AS ROOT ONLY)
Sources:
- https://github.com/psy0rz/zfs_autobackup
#### On the Backup HOST
My host `memorypalace`
Create SSH key and make sure you can connect to the device you want to backup (source device)
##### Install zfs-autobackup and pigz compression
`pip install --upgrade zfs-autobackup`
Add pigz compression
`sudo apt install -y pigz`
##### Create backup destination filesystem
In my case /tank/dejima
#### On the SOURCE device you want to backup
My source `dejima`
##### Install pigz compression
`sudo apt install -y pigz`
##### Define filesystems to backup
Specify the filesystems to snapshot and replicate by assigning a unique group name to those filesystems.
It's important to choose a unique group name and to use the name consistently.
(Advanced tip: If you have multiple sets of filesystems that you wish to backup differently, you may do this by creating multiple group names.)
In this example, assign the group name `memorypalace` to the filesystems to backup.
On the source device, set the `autobackup:memorypalace` zfs property to `true`, FOR THE WHOLE TANK as follows:
`zfs set autobackup:memorypalace=true tank`
Check if property has been added
`zfs get -t filesystem,volume autobackup:memorypalace`
Output
```bash
NAME                      PROPERTY                 VALUE  SOURCE
tank                      autobackup:memorypalace  true   local
tank/backup_restore_test  autobackup:memorypalace  true   inherited from tank
tank/backups              autobackup:memorypalace  true   inherited from tank
tank/photos               autobackup:memorypalace  true   inherited from tank
tank/storage              autobackup:memorypalace  true   inherited from tank
```
If we don't want to back up everything, we can exclude certain filesystems by setting the property to `false`:
`zfs set autobackup:memorypalace=false tank/backup_restore_test`
#### On the Backup HOST
Run the script to test
`zfs-autobackup -v --clear-mountpoint --compress pigz-fast --ssh-source dejima memorypalace tank/dejima`
Done
##### Add to cron
`crontab -e`
Add
`0 2 * * * /usr/local/bin/zfs-autobackup -v --clear-mountpoint --compress pigz-fast --ssh-source dejima memorypalace tank/dejima &`
#### Restore
`zfs send tank/dejima/tank/backup_restore_test@memorypalace-20231204171924 | ssh root@10.0.0.99 "zfs recv tank/restore"`
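For later restores, an incremental `zfs send -i` between two snapshots transfers only the delta, provided the destination already has the older snapshot. A sketch with illustrative snapshot names:
```bash
# Hypothetical incremental restore: only the changes between the two snapshots travel over SSH
zfs send -i tank/dejima/tank/backup_restore_test@memorypalace-20231204171924 \
    tank/dejima/tank/backup_restore_test@memorypalace-20231205171924 | \
    ssh root@10.0.0.99 "zfs recv tank/restore"
```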
#### Destroy
To see the list of holds on a snapshot, specify the snapshot name:
`zfs holds zroot/usr/home@snapshot3`
```
NAME TAG TIMESTAMP
zroot/usr/home@snapshot3 keepme Thu Aug 19 11:04 2021
```
Now that this snapshot has a hold, I am unable to delete the snapshot:
```
zfs destroy zroot/usr/home@snapshot3
cannot destroy snapshot zroot/usr/home@snapshot3: dataset is busy
```
If I no longer need to hold this snapshot and actually want to delete it, I’ll need to first release its hold. To do so, specify the name of the hold tag and the name of the snapshot:
```
zfs release keepme zroot/usr/home@snapshot3
zfs holds zroot/usr/home@snapshot3
NAME TAG TIMESTAMP
```
Now the snapshot deletion will succeed:
```
zfs destroy zroot/usr/home@snapshot3
zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
zroot/usr/home@snapshot2 140K - 1.99M -
zroot/usr/home@snapshot5 528K - 260M -
```
### ZFS as ISCSI for Proxmox (NOT USED)
Create filesystem
`zfs create tank/dejima/proxmox`
`sudo apt install -y tgt`
Set up the iSCSI target by editing its configuration file:
`sudo nano /etc/tgt/conf.d/myiscsi.conf`
Add the following configuration (customize as needed):
```
<target iqn.2023-12.com.example:target1>
    backing-store /tank/dejima/proxmox
    initiator-address ALL
    incominguser username password
</target>
```
Replace the `backing-store` path with your ZFS volume path and set a username and password for iSCSI authentication. The IQN format is `iqn.year-month.com.yourdomain:targetname`.
Enable and start the iSCSI service:
```
sudo systemctl enable tgt
sudo systemctl start tgt
```
Verify the configuration:
`sudo tgt-admin --show`
Proxmox settings
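On the Proxmox side, the target can be added via the GUI (Datacenter → Storage → Add → iSCSI) or in `/etc/pve/storage.cfg`; a sketch with placeholder values:
```
iscsi: dejima-iscsi
        portal 10.0.0.99
        target iqn.2023-12.com.example:target1
        content images
```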

### (OPTIONAL) CheckMK monitoring agent
- Install prerequisites
`apt install smartmontools`
- Install agent
`wget http://10.0.0.80/gits2501/check_mk/agents/check-mk-agent_2.2.0p17-1_all.deb`
- Install `smart` (S.M.A.R.T. disk monitoring) plugin [custom plugin](https://gitlab.com/antonionardella/homelab/-/tree/main/services/checkmk?ref_type=heads) to support USB connected disks through JMicron adapter
```bash
cd /usr/lib/check_mk_agent/plugins
wget -O smart "https://gitlab.com/antonionardella/homelab/-/raw/main/services/checkmk/smart?ref_type=heads"
chmod +x smart
```
# Network services
## Moonglow (Blue AP with dhcprelay)
Sources:
- https://www.cyberciti.biz/faq/debian-ubuntu-linux-setting-wireless-access-point/
### Install dietpi with hostapd
dietpi.txt
https://codeberg.org/antonionardella/homelab/src/branch/main/dietpi/moonglow/dietpi.txt
### Disable isc-dhcp-server
Disable and stop isc-dhcp-server
```
systemctl disable isc-dhcp-server
systemctl stop isc-dhcp-server
apt purge isc-dhcp-server
```
### Add the bridge to hostapd
Add bridge to hostapd.conf
`nano /etc/hostapd/hostapd.conf`
```
...
bridge=br0
...
```
### Bridge interfaces
`apt-get install bridge-utils`
Edit network interfaces as drop-in config
`nano /etc/network/interfaces.d/bridge`
```
auto lo br0
iface lo inet loopback

# wireless wlan0
allow-hotplug wlan0
iface wlan0 inet manual

# eth0 connected to newport
allow-hotplug eth0
iface eth0 inet manual

# Setup bridge
iface br0 inet static
    bridge_ports wlan0 eth0
    address 10.0.2.3
    netmask 255.255.255.0
    network 10.0.2.0
    ## isp router ip, 10.0.2.1 also runs DHCPD ##
    gateway 10.0.2.1
    dns-nameservers 10.0.0.79
```
Restart services
```
/etc/init.d/networking restart
/etc/init.d/hostapd restart
```
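To verify that both interfaces joined the bridge (`bridge-utils` was installed above):
```
brctl show br0
```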
# Set up services
## PiHole (LXC - Alpine)
Sources:
- https://www.datahoards.com/installing-pi-hole-inside-a-proxmox-lxc-container/
- https://gitlab.com/yvelon/pi-hole
- https://pimylifeup.com/raspberry-pi-unbound/
Installation:
- Use alpine 3.18 template
- Permit `doas` as root :thinking_face:
- `vi /etc/doas.d/doas.conf`
- add `permit nopass root as root`
- Add the edge community repository `https://dl-cdn.alpinelinux.org/alpine/edge/community` to `/etc/apk/repositories`
- `apk update && apk add bash git && apk upgrade`
- Forked and installed PiHole using https://gitlab.com/antonionardella/pi-hole
- `git clone https://gitlab.com/antonionardella/pi-hole`
- `cd pi-hole`
- `bash automated\ install/basic-install.sh`
- :information_source: Say `No` to precompiled FTL
## Traefik (LXC - Alpine)
Sources:
- https://wiki.alpinelinux.org/wiki/Setting_up_a_SSH_server
Installation:
- Install from alpine 3.18 template
- Install, set up and enable SSH manually
- *(Optional) Install python3 for ansible `apk add python3`*
- [Basic configuration](https://codeberg.org/antonionardella/homelab/src/branch/main/services/traefik) with file provider
- [Dynamic files](https://codeberg.org/antonionardella/homelab_traefik_confs/) in `/etc/traefik/dynamic`
## Headscale (LXC - Debian)
Sources:
- https://headscale.net/running-headscale-linux/
Installation:
- Official headscale installation
- REMEMBER TO REMOVE `After=syslog.service` in the headscale.service file
### Install webui
- Add user
```
/usr/sbin/adduser --disabled-password webui
/usr/sbin/adduser webui webui
mkdir -p /opt/webui/
chmod -R 700 /opt/webui
chown -R webui:webui /opt/webui
```
- Install Node via nvm as the `webui` user
```
su webui
wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
```
- Load nvm
```
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
```
- Get LTS
```
nvm install --lts
```
- Get sourcecode
```
cd /opt/webui/
git clone https://github.com/GoodiesHQ/headscale-admin
cd headscale-admin/
```
- Install webui (Make sure to assign 2GB RAM for building)
```
npm install
export ENDPOINT="/admin"
npm run build
mv build admin
```
- Run
```
cd admin
npm run preview -- --host 0.0.0.0
```
## Headscale in services LXC
Sources:
- https://tailscale.com/kb/1130/lxc-unprivileged/
To bring up Tailscale in an unprivileged container, access to the /dev/tun device can be enabled in the LXC's config. For example, on Proxmox 7.0 hosting an unprivileged LXC with ID 112, the following lines would be added to /etc/pve/lxc/112.conf:
```bash!
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
```
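After restarting the container and installing tailscale inside it, register against headscale with the `--login-server` flag; the URL below is a placeholder:
```bash
tailscale up --login-server https://headscale.example.lan
```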
## PostgreSQL (LXC - Debian)
Sources:
- https://linux.how2shout.com/install-postgresql-database-server-on-debian-12-bookworm-linux/
## Vaultwarden (LXC - Debian)
15GB Disk
Sources:
- https://geekistheway.com/2022/12/27/deploying-a-public-vaultwarden-instance-on-a-proxmox-lxc-container-using-haproxy-on-pfsense/
- https://github.com/nodesource/distributions#debian-and-ubuntu-based-distributions
- https://github.com/dani-garcia/vaultwarden
Install:
### Server component
Note that all commands below are executed as root, so you may need to prepend `sudo` if you prefer it that way. You can refer to vaultwarden's original wiki for the full steps, including variants that use MySQL or PostgreSQL as the database, but the steps below are enough to get things going with SQLite3.
Rust
```
apt update && apt install -y curl git wget # basic tools
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh # latest rust, not from OS package manager
source ~/.cargo/env # reload the shell environment with Rust's tools
```
Node 20
```
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
NODE_MAJOR=20
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | sudo tee /etc/apt/sources.list.d/nodesource.list
sudo apt-get update
sudo apt-get install nodejs -y
```
SQLite
```
apt install -y build-essential pkg-config libssl-dev # basic vaultwarden deps
apt install -y libsqlite3-dev sqlite3 # database-specific
```
Vaultwarden
```
mkdir -p /opt/ && cd /opt/ # installing at /opt/vaultwarden
git clone --recursive https://github.com/dani-garcia/vaultwarden.git # vaultwarden
cd vaultwarden
git tag # list all stable versions
git checkout 1.32.0 # or whatever latest version they have
cargo build --features sqlite --release # build vaultwarden with sqlite database
ln -s /opt/vaultwarden/target/release/vaultwarden /opt/vaultwarden/
```
### Web-Portal
```
export NODE_OPTIONS="--max-old-space-size=2048" # bw_web_builds build may fail without that
cd /opt
git clone https://github.com/dani-garcia/bw_web_builds.git # web-portal code
cd bw_web_builds
git tag
make full # type 'v2023.12.0' or a newer tag at the prompt
ln -s /opt/bw_web_builds/web-vault/apps/web/build/ /opt/vaultwarden/web-vault
```
At this point, vaultwarden is compiled at /opt/vaultwarden/target/release/vaultwarden.
### Edit env
`vi /opt/vaultwarden/vaultwarden.env`
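The file's contents are not reproduced here; as a sketch, commonly used vaultwarden settings look like this (values are placeholders, the port matches the traefik router used later):
```
DATA_FOLDER=/opt/vaultwarden/data
WEB_VAULT_FOLDER=/opt/vaultwarden/web-vault
ROCKET_ADDRESS=0.0.0.0
ROCKET_PORT=8000
DOMAIN=https://vaultwarden.example.lan
SIGNUPS_ALLOWED=false
```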
### Data
All data resides in `/opt/vaultwarden/data`
### Set up service
`vi /etc/systemd/system/vaultwarden.service`
```
[Unit]
Description=Bitwarden Server (Rust Edition)
Documentation=https://github.com/dani-garcia/vaultwarden
# If you use a database like mariadb,mysql or postgresql,
# you have to add them like the following and uncomment them
# by removing the `# ` before it. This makes sure that your
# database server is started before vaultwarden ("After") and has
# started successfully before starting vaultwarden ("Requires").
# Only sqlite
After=network.target
# MariaDB
# After=network.target mariadb.service
# Requires=mariadb.service
# Mysql
# After=network.target mysqld.service
# Requires=mysqld.service
# PostgreSQL
# After=network.target postgresql.service
# Requires=postgresql.service
[Service]
# The user/group vaultwarden is run under. the working directory (see below) should allow write and read access to this user/group
User=vaultwarden
Group=vaultwarden
# Use an environment file for configuration.
EnvironmentFile=/opt/vaultwarden/vaultwarden.env
# The location of the compiled binary
ExecStart=/opt/vaultwarden/vaultwarden
# Set reasonable connection and process limits
LimitNOFILE=1048576
LimitNPROC=64
# Isolate vaultwarden from the rest of the system
PrivateTmp=true
PrivateDevices=true
ProtectHome=true
ProtectSystem=strict
# Only allow writes to the following directory and set it to the working directory (user and password data are stored here)
WorkingDirectory=/opt/vaultwarden
ReadWriteDirectories=/opt/vaultwarden
# Allow vaultwarden to bind ports in the range of 0-1024
AmbientCapabilities=CAP_NET_BIND_SERVICE
[Install]
WantedBy=multi-user.target
```
`systemctl daemon-reload`
Test
`systemctl start vaultwarden.service`
Enable
`systemctl enable vaultwarden.service`
### Adding vaultwarden user
```
/usr/sbin/adduser --disabled-password vaultwarden
/usr/sbin/adduser vaultwarden vaultwarden
mkdir /opt/vaultwarden/data
chmod -R 700 /opt/vaultwarden
chown -R vaultwarden:vaultwarden /opt/vaultwarden
```
### Upgrading Vaultwarden and Web-portal
- Stop service
`systemctl stop vaultwarden`
- Upgrade rust with rustup
`rustup update`
- Upgrade vaultwarden
```
# Vaultwarden
cd /opt/vaultwarden # since it is installed at /opt/vaultwarden
git pull # update repo
git tag # list all stable versions
git stash # stash changes made during last compilation
git checkout 1.32.0 # or whatever latest version they have
cargo build --features sqlite --release # build vaultwarden with sqlite database
ln -sf /opt/vaultwarden/target/release/vaultwarden /opt/vaultwarden/ # -f: overwrite the old symlink
# Web portal
cd /opt/bw_web_builds
git pull
git tag
git stash
make full # type 'v2023.12.0' or a newer tag at the prompt
ln -sfn /opt/bw_web_builds/web-vault/apps/web/build/ /opt/vaultwarden/web-vault # -fn: replace the old symlink
```
### (Optional) Publish as Headscale only service
Follow the [Tail(head)scale-network-services](https://hackmd.io/nnEzm_ByQ5mGMMycA15iQQ?both#Tailheadscale-network-services) section first. Then come back to set up the service
## Firefox sync server
Sources:
- https://github.com/mozilla-services/syncstorage-rs
Install:
-
## Proton Bridge
Sources:
- https://proton.me/support/bridge-cli-guide
- https://gist.github.com/githubcom13/2f30f46cd5273db2453a6e7fdb3c422b
Install:
- `git clone https://github.com/ProtonMail/proton-bridge.git`
- Run this script to make the bridge listen on all interfaces:
```
#!/bin/bash
# Define the list of files
files=(
    "internal/constants/constants.go"
    "internal/focus/service.go"
    "internal/frontend/grpc/service.go"
    "utils/port-blocker/port-blocker.go"
    "utils/smtp-send/main.go"
)
# Loop through each file
for file in "${files[@]}"; do
    # Use sed with -i flag to modify the file in-place
    sed -i "s/127.0.0.1/0.0.0.0/g" "$file"
    # Print a message for each modified file
    echo "Replaced '127.0.0.1' with '0.0.0.0' in $file"
done
echo "All files processed."
```
- `sudo apt-get install rng-tools-debian pass gnome-keyring libsecret-1-dev pkg-`
- `make build-nogui`
- Initialize `sudo -u bridges ./bridge --cli init`
- Generate PGP key `sudo -u bridges gpg --full-generate-key`
- `sudo -u bridges pass init info@antonionardella.it`
- `sudo -u bridges gpg --list-keys`
- Then follow the instructions below to create the service with tmux
### Upgrade
- `git pull`
- `git checkout <latest-branch>`
- run bash from above
- `make build-nogui`
## Protonbridge x64 Debian 12 server
Sources:
- https://gist.github.com/githubcom13/2f30f46cd5273db2453a6e7fdb3c422b
- https://meta.discourse.org/t/use-protonmail-bridge-with-discourse/229503
### Get the protonmail bridge linux installer
Download the latest package into your computer.
```
wget https://protonmail.com/download/bridge/protonmail-bridge_3.12.0-1_amd64.deb
```
The link above works at the time of writing, but as the bridge team pointed out, they expire all previous links once they release a new version, to encourage installing the latest one.
To get the latest version, try increasing the version numbers in the link, or write an email to bridge@protonmail.ch (https://protonmail.com/support/knowledge-base/bridge-for-linux/)
### Install protonmail bridge
We will need root access for the setup
Install the protonmail bridge client
```
dpkg -i protonmail-bridge_3.12.0-1_amd64.deb
```
### Install additional tools required for the setup
Install the "pass" password manager that protonmail bridge will use to store the passwords
```
apt install pass
```
Install the "tmux" utility to daemonize the protonmail bridge client
```
apt install tmux
```
### Create a new user
We will create a new user, mainly to isolate access to the passwords from other users.
Notice that the new user will be locked to prevent direct logins.
```
useradd protonmail
usermod -L protonmail
```
Create a protonmail directory in /home
```
cd /home
mkdir protonmail
```
Change folder owner
```
chown -R protonmail:protonmail /home/protonmail
```
### Setup "pass" password manager
Login as the new isolated user
```
su protonmail
cd ~
```
Run a script session to keep the PGP key passphrase prompt from failing (https://bugzilla.redhat.com/show_bug.cgi?id=659512).
This is required when not using a graphical interface, due to the way our isolated user runs shell commands
```
script /dev/null
```
Generate PGP key pair for the new user with an empty passphrase.
The empty passphrase is required to run the protonmail bridge on the background on system startup without being prompted for the password and hence causing the process to fail.
```
gpg --full-generate-key
>>>> Choose 1 (1) RSA and RSA (default)
>>>> Choose 4096 (default)
>>>> Choose 0 0 = key does not expire
>>>> Type your name e.g. Proty McProtonFace
>>>> Type your email e.g. a@a.com
>>>> Leave empty comment
>>>> Leave empty passphrase
```
List the keys to ensure they were created correctly
```
gpg --list-keys
```
Init the password manager for the chosen email address in the PGP keys step
```
pass init a@a.com
```
### Setup the protonmail bridge client
At this point we already set up the password manager that will allow the protonmail bridge to store the passwords so we will now setup your protonmail account.
```
protonmail-bridge --cli
>>>> add (add your protonmail account to bridge)
>>>> (enter your protonmail account email address)
>>>> (enter your protonmail account password)
>>>> list (list configured accounts)
>>>> info (list SMTP credentials for configuring any local SMTP compatible service)
>>>> help (get familiarized with the bridge options)
>>>> info (get ports, username and password for the email client)
>>>> exit (exit the bridge console which stops the local SMTP server created)
```
:information_source: Remember to use SOCAT below and to adapt IP and ports
Exit the scripted mode of the isolated user if you previously ran "script /dev/null"
```
exit
```
### Daemonize the protonmail bridge client
In order to start the bridge client automatically on system startup, we will create a script to run it in the background.
Notice that we use the "tmux" utility, since there is currently no way to run the protonmail linux client in the background without a graphical interface.
For this we will need root access again.
```
exit
```
Create a basic script that will be able to launch the protonmail bridge client in the background and kill it.
```
mkdir /var/lib/protonmail
nano /var/lib/protonmail/protonmail.sh
```
Copy the content
```
#!/bin/bash
case "$1" in
    start)
        # Will create a tmux session in detached mode (background) with name "protonmail"
        tmux new-session -d -s protonmail protonmail-bridge --cli
        echo "Service started."
        ;;
    status)
        # ignore this block unless you understand how tmux works and that it only lists the current user's sessions
        if tmux has-session -t protonmail; then
            echo "Protonmail bridge service is ON."
        else
            echo "Protonmail bridge service is OFF."
        fi
        ;;
    stop)
        # Will quit a tmux session called "protonmail" and therefore terminate the running protonmail-bridge process
        tmux kill-session -t protonmail
        echo "Service stopped."
        ;;
    *)
        echo "Unknown command: $1"
        exit 1
        ;;
esac
```
Make it executable
```
chmod +x /var/lib/protonmail/protonmail.sh
```
Create a systemd service
```
nano /etc/systemd/system/protonmail.service
```
Copy the content
```
[Unit]
Description=Service to run the Protonmail bridge client
After=network.target
[Service]
Type=oneshot
User=protonmail
ExecStart=/var/lib/protonmail/protonmail.sh start
ExecStop=/var/lib/protonmail/protonmail.sh stop
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
```
Enable the script so that it can run on system startup
```
systemctl enable protonmail
```
Test the protonmail service
```
systemctl start protonmail
netstat -tulpn | grep 1025
```
Reboot your system and check if protonmail bridge is bound to the default ports
```
reboot
netstat -tulpn | grep 1025
```
### Configure SMTP services
Now that you have the protonmail bridge running in the background you can configure SMTP emails on local instances of Jenkins, Jira, Bitbucket, Thunderbird or any service of your choice.
Remember that required credentials and configuration details can be found by executing:
```
protonmail-bridge --cli
>>>> info
>>>> exit
```
### Socat to enable external access
Problem: Protonmail Bridge cannot listen on IPs other than 127.0.0.1, and we want to use this container headlessly.
Solution: we use Socat to redirect external ports to 127.0.0.1, on which the Protonmail Bridge is listening.
#### How To
Start by installing Socat:
```
apt install socat
```
We will run Socat as a service. Create service file:
`nano /etc/systemd/system/protonsocat.service` and put the following content in it:
```
[Unit]
Description=Socat Bridge ProtonMail
After=protonmail.service
[Service]
ExecStart=/bin/sh -c "socat -d -d -lm TCP4-LISTEN:1026,fork,reuseaddr TCP4:127.0.0.1:1025 & socat -d -d -lm TCP4-LISTEN:1144,fork,reuseaddr TCP4:127.0.0.1:1143"
Restart=always
RestartSec=2
[Install]
WantedBy=multi-user.target
```
Notes:
* We configured Socat to listen on all IPs and interfaces:
    * port 1026 redirects traffic to 127.0.0.1:1025 (SMTP)
    * port 1144 redirects traffic to 127.0.0.1:1143 (IMAP), where Protonmail Bridge is listening
* See the socat(8) man page for more examples
* -d -d prints the fatal, error, warning, and notice messages; you may not want that much detail logged
Reload daemon and enable it
```
systemctl daemon-reload
```
Next, start the service:
```
systemctl start protonsocat
```
Check that it is running:
```
systemctl status protonsocat
```
And automatically get it to start on boot:
```
systemctl enable protonsocat
```
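To confirm the forwarded ports are listening on all interfaces:
```
netstat -tulpn | grep -E '1026|1144'
```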
#### Set up mail client

Confirm security exception

#### Problems
**Note:** When sending an email via PHPMailer, the following message is displayed:
```
Connection failed. Error #2: stream_socket_client(): unable to connect to 127.0.0.1:1026 (Connection refused)
SMTP ERROR: Failed to connect to server: Connection refused (111)
```
OR
```
SMTP INBOUND: "454 4.7.0 account is logged out, use the app to login again"
SERVER -> CLIENT: 454 4.7.0 account is logged out, use the app to login again
SMTP ERROR: Password command failed: 454 4.7.0 account is logged out, use the app to login again
SMTP Error: Could not authenticate.
```
**Solution 1 :**
More than one process listens on the same port. Changing the port in Protonmail-bridge may correct the problem.
To solve it I had to:
Login as the new isolated user
```
su protonmail
cd ~
```
This is required if we are not using a graphical interface due to the way our isolated user runs the shell commands
```
script /dev/null
```
Change the port setting (inside `protonmail-bridge --cli`)
```
change port
```
**Solution 2 :**
Two user processes (root and protonmail) are executed at the same time.
1. Stop the "proton-bridge" process using the killall command
```
killall -9 proton-bridge
```
2. Fully uninstall protonmail-bridge
```
apt purge protonmail-bridge
```
3. Remove all protonmail folders and configuration files in the `root` profile
4. Remove the protonmail folder in the `home` folder
```
rm -rf /home/protonmail
```
5. Reboot
6. Repeat the protonmail-bridge installation procedure
#### Problems
**Note:** When running Bridge on the command line, the following message is printed:
```
WARN[0000] Failed to add test credentials to keychain error="exit status 1: gpg: Passwords: skipped: No public key\ngpg: [stdin]: encryption failed: No public key\nPassword encryption aborted.\n" helper="*pass.Pass"
```
This is a bug with the keyring and pass.
**Solution:**
To solve it I had to:
1. uninstall gnupg and pass
`apt remove gnupg pass`
2. delete the `.gnupg` and `.password-store` folders
```
rm -rf /home/protonmail/.gnupg
rm -rf /home/protonmail/.password-store
```
3. reinstall gnupg and pass
`apt install gnupg pass`
4. login as the new isolated user
```
su protonmail
cd ~
```
5. run a script session to avoid the PGP key passphrase prompt to fail
`script /dev/null`
6. run gpg to create the database and its folder
`gpg --list-keys`
7. create a new key
```
gpg --full-generate-key
>>>> Choose 1 (1) RSA and RSA (default)
>>>> Choose 2048 (default)
>>>> Choose 0 0 = key does not expire
>>>> Type your name e.g. Proty McProtonFace
>>>> Type your email e.g. a@a.com
>>>> Leave empty comment
>>>> Leave empty passphrase
```
8. Init the password manager for the chosen email address in the PGP keys step
`pass init a@a.com`
9. List the keys to ensure they were created correctly
`gpg --list-keys`
10. Setup the protonmail bridge client, follow the procedure I described here
## Reticulum
Sources:
- https://reticulum.network/manual/gettingstartedfast.html#install-guides
Install:
- Prereqs
`apt install -y tmux sudo pipx`
- User
```
useradd -r -s /usr/sbin/nologin reticulum
mkdir /home/reticulum
chown reticulum:reticulum /home/reticulum
usermod -a -G sudo reticulum
```
- Install as user
- `sudo -u reticulum pipx ensurepath`
```
echo 'export PATH=/home/reticulum/.local/bin:$PATH' | sudo tee -a /home/reticulum/.profile
```
- `sudo -u reticulum pipx install rns`
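The `rns` package ships the `rnsd` daemon; a sketch of a systemd unit to run it as the `reticulum` user (the binary path assumes the pipx install above):
```
[Unit]
Description=Reticulum Network Stack daemon
After=network.target
[Service]
User=reticulum
ExecStart=/home/reticulum/.local/bin/rnsd
Restart=on-failure
[Install]
WantedBy=multi-user.target
```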
## SSH-Chat
Sources:
- https://www.linuxcapable.com/how-to-install-golang-go-on-debian-linux/
- https://github.com/shazow/ssh-chat/wiki/Deployment
- https://github.com/shazow/ssh-chat
Install:
- Prereqs
`apt install -y wget git tmux build-essential`
- Get go
```
wget https://golang.org/dl/go1.21.5.linux-arm64.tar.gz
sudo tar -C /usr/local -xzf go1.21.5.linux-arm64.tar.gz
echo "export PATH=/usr/local/go/bin:${PATH}" | sudo tee -a $HOME/.profile
source $HOME/.profile
go version
```
- User
```
useradd -r -s /usr/sbin/nologin ssh-chat
mkdir /home/ssh-chat
chown ssh-chat:ssh-chat /home/ssh-chat
```
- Get code and compile
```
git clone https://github.com/shazow/ssh-chat.git
cd ssh-chat
git checkout v1.11-rc5
make build
cp ssh-chat /usr/bin/
```
- Generate keys
```
sudo -u ssh-chat ssh-keygen -t ed25519 -a 300 -f /home/ssh-chat/.ssh/ssh-chat-key
```
- Touch authorized_key file
```
sudo -u ssh-chat touch /home/ssh-chat/.ssh/authorized_keys
```
- MOTD
`sudo -u ssh-chat printf "\033[97mAnd can \033[91myou \033[97moffer me \033[91mProof \033[97mof \033[91mYour \033[97mexistence?\nHow can \033[91mYou\033[97m, when neither \033[91mModern \033[91mScience \033[97mnor \033[91mPhilosophy \033[97mcan explain what \033[91mLife \033[97mis?\033[0m\n" >/home/ssh-chat/motd.txt`
- Log
```
touch /var/log/ssh-chat.log
chown root:ssh-chat /var/log/ssh-chat.log
chmod 766 /var/log/ssh-chat.log
```
- Service (Replace /PATH/TO/)
`vi /etc/systemd/system/ssh-chat.service`
```
[Unit]
Description=ssh-chat
After=network.target
[Service]
Type=simple
User=ssh-chat
# Optionally append: --allowlist="/home/ssh-chat/.ssh"
ExecStart=/usr/bin/ssh-chat --bind=":2501" -i="/home/ssh-chat/.ssh/ssh-chat-key" --admin="/home/ssh-chat/.ssh/authorized_keys" --motd="/home/ssh-chat/motd.txt" --log="/var/log/ssh-chat.log"
AmbientCapabilities=CAP_NET_BIND_SERVICE
Restart=always
[Install]
WantedBy=multi-user.target
```
- Enable
```
systemctl daemon-reload
systemctl start ssh-chat
systemctl enable ssh-chat
```
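Once running, the chat is reachable with any SSH client:
```
ssh -p 2501 yournick@<server-ip>
```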
## Matterbridge
Sources:
- https://github.com/42wim/matterbridge/
Installation:
- Get
```
wget https://github.com/42wim/matterbridge/releases/download/v1.26.0/matterbridge-1.26.0-linux-64bit -O /usr/bin/matterbridge
chmod +x /usr/bin/matterbridge
```
- Conf
`mkdir -p /etc/matterbridge`
`cd /etc/matterbridge`
`vi matterbridge.toml`
Example
```
[discord]
[discord.nard]
Token="TOKEN"
Server="SERVERID"
AllowMention=["everyone", "roles", "users"]
ShowEmbeds=true
UseUserName=true
UseDiscriminator=false
AutoWebhooks=true
EditDisable=false
EditSuffix=" (edited)"
RemoteNickFormat="[{PROTOCOL}] <{NICK}> "
ShowJoinPart=false
StripNick=false
ShowTopicChange=false
SyncTopic=false
[telegram]
[telegram.laughingman]
Token="TOKEN"
RemoteNickFormat="{NICK} "
MessageFormat="HTMLNick"
[sshchat]
[sshchat.mychat]
Server="address:port" # eg "localhost:2022" or "1.2.3.4:22"
Nick="matterbridge"
RemoteNickFormat="[{PROTOCOL}] <{NICK}> "
[[gateway]]
name="Discord to Telegram"
enable=true
[[gateway.in]]
account="discord.nard"
channel="ID:CHANNELID"
[[gateway.out]]
account="telegram.laughingman"
channel="-CHANNELID"
```
- User
```
useradd -r -s /usr/sbin/nologin bridges
mkdir /home/bridges
chown bridges:bridges /home/bridges
```
- Service `vi /etc/systemd/system/matterbridge@.service`
```
[Unit]
Description=Matterbridge, connect platforms %I
After=network.target
[Service]
User=bridges
ExecStart=/usr/bin/matterbridge -conf /etc/matterbridge/matterbridge.toml
Restart=on-failure
SuccessExitStatus=3 4
RestartForceExitStatus=3 4
[Install]
WantedBy=multi-user.target
```
- Reload and start
```
systemctl daemon-reload
systemctl start matterbridge@matterbridge
systemctl enable matterbridge@matterbridge
```
## Syncthing (LXC - Debian - Privileged)
Sources:
- https://www.miguelvallejo.com/install-syncthing-debian-11/
- https://techviewleo.com/configuring-syncthing-file-synchronization-on-debian/
- https://authmane512.medium.com/how-to-install-syncthing-on-a-debian-server-and-configure-it-5fb905a2dbac
Installation:
- Set up user
```
useradd -r -s /usr/sbin/nologin syncthing
mkdir /home/syncthing
chown syncthing:syncthing /home/syncthing
```
- Install `apt install syncthing -y`
- Service `vi /etc/systemd/system/syncthing@.service`
```
[Unit]
Description=Syncthing - Open Source Continuous File Synchronization for %I
Documentation=man:syncthing(1)
After=network.target
[Service]
User=%i
ExecStart=/usr/bin/syncthing -no-browser -gui-address="0.0.0.0:8384" -no-restart -logflags=0
Restart=on-failure
SuccessExitStatus=3 4
RestartForceExitStatus=3 4
[Install]
WantedBy=multi-user.target
```
- Reload and start
```
systemctl daemon-reload
systemctl start syncthing@syncthing
systemctl enable syncthing@syncthing
```
- Add NFS share `/mnt/syncthing/`
- Fix permissions `chown -R syncthing:syncthing /mnt/syncthing/`
- Create a Systemd Service File
Open a text editor to create a new service file. For example, you can use nano:
`sudo nano /etc/systemd/system/nfs-mount.service`
Add the following content to the file:
```
[Unit]
Description=Mount NFS Share
Wants=network-online.target
After=network-online.target
[Service]
Type=oneshot
ExecStart=/bin/mount -t nfs nfs-server:/path/to/share /mnt/mountpoint
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
```
Replace nfs-server:/path/to/share with your NFS server's address and NFS share path, and /mnt/mountpoint with the desired mount point.
- Enable and Start the Service
Reload the systemd manager configuration:
`sudo systemctl daemon-reload`
Enable the new service so that it starts on boot:
`sudo systemctl enable nfs-mount.service`
Start the service immediately to test it:
`sudo systemctl start nfs-mount.service`
- Verify the Mount
Check the status of the service to ensure it started without errors:
`sudo systemctl status nfs-mount.service`
Verify that the NFS share is mounted:
`mount | grep nfs`
## Librechat (LXC - Ubuntu)
Sources:
- https://github.com/danny-avila/LibreChat
Install:
- `useradd -m -s /bin/bash pm2`
- `passwd pm2`
- Install [nvm](https://github.com/nvm-sh/nvm#installing-and-updating)
- `su pm2`
- `curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash`
- Install nodeJS 20
- `nvm install v20`
- Install [LibreChat](https://github.com/danny-avila/LibreChat)
- `cd /usr/local`
- `git clone https://github.com/danny-avila/LibreChat`
- Follow installation
- get pm2 to manage the service
- `npm install -g pm2`
- `pm2 startup`
- `cd /usr/local/LibreChat`
- `HOST=0.0.0.0 pm2 start "npm run backend" --name LibreChat`
- `pm2 save`
## SimpleX Chat server(s)
Sources:
- https://simplex.chat/docs/server.html
- https://github.com/simplex-chat/simplexmq#using-your-distribution
Installation: SMP server with Tor enabled
- `apt install -y curl build-essential libgmp3-dev zlib1g-dev git libssl-dev lsb-release`
- `curl --proto '=https' --tlsv1.2 -sSf https://get-ghcup.haskell.org | sh`
```
<Enter>
<P> <Enter>
<Y> <Enter>
<Y> <Enter>
<Enter>
```
- Refresh shell and install ghc and cabal
`source .profile`
```
ghcup install ghc 9.6
ghcup install cabal
ghcup set ghc 9.6
ghcup set cabal
```
- Clone and install
`curl --proto '=https' --tlsv1.2 -sSf https://codeberg.org/antonionardella/simplexmq/raw/branch/main/install.sh -o install.sh && sh install.sh && rm install.sh`
- Check that a TOR hostname is active
`cat /var/lib/tor/simplex-smp/hostname`
- Init smp-server with password
> Set password to create new messaging queues
```
sudo su smp -c "smp-server init -y -l -a ED25519 --fqdn $(cat /var/lib/tor/simplex-smp/hostname) --password <INSERT A VERY STRONG PASSWORD>"
```
- Copy details to the smp server
- Enable and start service
```
sudo systemctl enable smp-server.service
sudo systemctl start smp-server.service
```
- Init xftp-server
> Init with 10gb quota
```
sudo su xftp -c "xftp-server init -l -a ED25519 --fqdn $(cat /var/lib/tor/simplex-xftp/hostname) -q '10gb' -p /srv/xftp/"
```
- Copy details to the xftp server
- Enable and start service
```
sudo systemctl enable xftp-server.service
sudo systemctl start xftp-server.service
```
- Add a password: under the `[AUTH]` section, uncomment `create_password` and change it
`sudo su xftp -c "vim /etc/opt/simplex-xftp/file-server.ini"`
- Restart service
`sudo systemctl restart xftp-server.service`
## Forgejo (Debian)
Sources:
- https://forgejo.org/docs/latest/admin/installation-binary/
No SQL server, but SQLite
NFS Mount for repo data
Installation:
- `apt install -y curl wget git git-lfs`
- `wget https://codeberg.org/forgejo/forgejo/releases/download/v8.0.3/forgejo-8.0.3-linux-arm64`
- `cp forgejo-8.0.3-linux-arm64 /usr/local/bin/forgejo`
- `chmod 755 /usr/local/bin/forgejo`
- `adduser --system --shell /bin/bash --gecos 'Git Version Control' \
--group --disabled-password --home /home/git git`
- `mkdir /var/lib/forgejo`
- `chown git:git /var/lib/forgejo && chmod 750 /var/lib/forgejo`
- `mkdir /etc/forgejo`
- `chown root:git /etc/forgejo && chmod 770 /etc/forgejo`
- `wget -O /etc/systemd/system/forgejo.service https://codeberg.org/forgejo/forgejo/raw/branch/forgejo/contrib/systemd/forgejo.service`
- `shutdown`
- Set up NFS `mountpoint` for the `/var/lib/forgejo` directory
- **WITHIN THE PROXMOX HOST** mount the NFS share
- `nano /etc/pve/lxc/<ID>.conf`
- `mp0: /mnt/pve/section9-vmctdata/forgejo,mp=/var/lib/forgejo`
- Set up Tailscale support
- `lxc.cgroup2.devices.allow: c 10:200 rwm`
- `lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file`
- Start container
- Connect to IP:3000 continue installation
### Enable SSH
- `vi /etc/forgejo/app.ini`
```
DISABLE_SSH = false
SSH_PORT = 22
START_SSH_SERVER = true
SSH_LISTEN_PORT = 2222
```
- Make sure to add `PubkeyAuthentication yes` to `sshd_config`
### Upgrade
- Get latest release from https://codeberg.org/forgejo/forgejo/releases
- `systemctl stop forgejo`
- `cp forgejo-{version}-linux-arm64 /usr/local/bin/forgejo
chmod 755 /usr/local/bin/forgejo`
- `systemctl start forgejo`
## Code server
Sources:
- https://github.com/coder/code-server
Installation:
- `curl -fsSL https://code-server.dev/install.sh | sh`
- `useradd -m code`
- `sudo systemctl enable --now code-server@code`
- Change bind address
- `vi ~/.config/code-server/config.yaml`
`bind-addr: 0.0.0.0:8080`
- Add tailscale service
- Expose via Tailscale traefik
- Change service options, e.g. disable telemetry
`vi /lib/systemd/system/code-server@.service`
```
[Unit]
Description=code-server
After=network.target
[Service]
Type=exec
ExecStart=/usr/bin/code-server --disable-telemetry --disable-getting-started-override
Restart=always
User=%i
[Install]
WantedBy=default.target
```
## Tail(head)scale network services
Sources:
- https://github.com/FiloSottile/mkcert
- https://www.mariobuonomo.dev/blog/traefik-ssl-self-signed-development-mode
- https://web.dev/articles/how-to-use-local-https
- https://doc.traefik.io/traefik/https/tailscale/
In this setup we use an alpine container with `traefik` and `mkcert` to route internal services within the tail(head)net and to generate SSL (TLS) certificates. Configured services are then available only within the tail(head)net, served with SSL (TLS), and will only be accessible from clients inside this secured network.
### Install mkcert
In this case it will be installed in the existing `traefik`, [alpine based container](https://hackmd.io/nnEzm_ByQ5mGMMycA15iQQ?both#Traefik-LXC---Alpine).
This container will create a wildcard certificate using `mkcert` for the local network domain (example.lan), which will be used to serve services on the head/tailnet with SSL support.
- `apk add go`
- `git clone https://github.com/FiloSottile/mkcert && cd mkcert`
- `go build -ldflags "-X main.Version=$(git describe --tags)"`
- `go install`
### Connect mountpoint for certificates
Mount common NFS share (e.g. section9/cannon/vmctdata/certs)
Add mountpoint to lxc container (e.g. container `102`)
- `nano /etc/pve/lxc/102.conf`
- `mp0: section9/cannon/vmctdata/certs,mp=/etc/traefik/ssl`
### Initiate CA and create certs
:warning: My GOPATH is not set, therefore `mkcert` is called using the path prefix `go/bin/`
- `go/bin/mkcert -install` # Create Certificate Authority
- `go/bin/mkcert -ecdsa -key-file /etc/traefik/ssl/key.pem -cert-file /etc/traefik/ssl/cert.pem example.lan "*.example.lan" example.dmz` # the example.dmz can be used for a separate network with devices in the [DMZ](https://en.wikipedia.org/wiki/DMZ_(computing))
### Add Tailscale to Traefik container
Follow [Headscale in services LXC](https://hackmd.io/nnEzm_ByQ5mGMMycA15iQQ?both#Headscale-in-services-LXC) to add tailscale to traefik container
- make sure tailscale starts at boot `rc-update add tailscale`
### pi.hole DNS configuration
The [pi.hole](https://hackmd.io/nnEzm_ByQ5mGMMycA15iQQ?both#PiHole-LXC---Alpine) container is used as the DNS server for the Tail(head)scale network. At this point, add the desired service to the DNS with the headscale network IP of the `traefik` container.
e.g.
- domain `vaultwarden.example.lan`
- `traefik` container IP: `100.64.0.23`

For every service you want to add, e.g. `nextcloud.example.lan`, always point the DNS record at the `traefik` container IP. Within `traefik`, add the corresponding `router` to the traefik configuration.
### Traefik dynamic configuration for the SSL certs
I will add a new configuration file for the dynamic setting to [traefik](https://hackmd.io/nnEzm_ByQ5mGMMycA15iQQ?both#Traefik-LXC---Alpine) to make sure it loads the previously generated certificates
- `nano /etc/traefik/dynamic/tls.yaml`
```
tls:
  certificates:
    - certFile: /etc/traefik/ssl/cert.pem
      keyFile: /etc/traefik/ssl/key.pem
  stores:
    default:
      defaultCertificate:
        certFile: /etc/traefik/ssl/cert.pem
        keyFile: /etc/traefik/ssl/key.pem
```
- Now add router for vaultwarden in the tail(head)net
- `nano /etc/traefik/dynamic/head-vaultwarden.yaml`
```
http:
  routers:
    vaultwarden:
      rule: "Host(`vaultwarden.example.lan`)"
      service: service-vaultwarden
      middlewares:
        - websecure-redirect-vw
      entryPoints:
        - websecure
      tls: {}
  middlewares:
    websecure-redirect-vw:
      redirectScheme:
        scheme: https
        permanent: true
  services:
    service-vaultwarden:
      loadBalancer:
        servers:
          - url: "http://100.64.0.19:8000" # tail(head)net address of your vaultwarden container
```
### Export certificate to your clients
At this point, to make sure your clients accept the self-signed certificates generated with `mkcert`, it is necessary to [export and import the certificate authority](https://github.com/FiloSottile/mkcert?tab=readme-ov-file#installing-the-ca-on-other-systems).
Back in the `traefik` container with `mkcert`
- `go/bin/mkcert -CAROOT` # To find out where the root certificate is
- `cat /root/.local/share/mkcert/rootCA.pem`
- Copy the output to `rootCA.pem` locally and import it in your browser and android devices. See https://bitwarden.com/help/certificates/ for more information.
## ARM64 templates in Proxmox
The goal is to have a custom `/usr/share/doc/pve-manager/aplinfo.dat` file with the arm64 templates from the [Linuxcontainers](https://jenkins.linuxcontainers.org/view/Images/) collection
Sources:
- https://stevetech.me/posts/find-arm64-lxc-templates
- https://github.com/proxmox/pve-manager/tree/51fcf81434554a8e9783883b2a306e853670a8f6/aplinfo
```python
import feedparser
import requests

# Define the URL of the RSS feed
rss_url = "https://jenkins.linuxcontainers.org/rssLatest"

# Define a dictionary to map image names to their corresponding release values as a list
release_map = {
    "image-alpine": ["3.18"],
    "image-archlinux": ["current"],
    "image-debian": ["bullseye", "bookworm"],
    "image-fedora": ["38", "39"],
    "image-ubuntu": ["focal", "lunar"],
    "image-nixos": ["current"],
    # Add more mappings for other image names as needed
}

# Parse the RSS feed
feed = feedparser.parse(rss_url)

# Check if parsing was successful
if feed.status == 200:
    # Initialize an empty list to store the generated links
    image_links = []
    # Extract and generate links from the RSS feed
    for entry in feed.entries:
        if 'link' in entry:
            link = entry.link
            print(link)
            # Check if the link starts with "https://jenkins.linuxcontainers.org/job/"
            if link.startswith("https://jenkins.linuxcontainers.org/job/"):
                # Extract the image name from the link
                image_name = link.split("/")[-3]
                print(image_name)
                # Check if the image name is in the release_map
                if image_name in release_map:
                    # Get the corresponding release values (a list)
                    release_values = release_map[image_name]
                    # Iterate through the release values and build links
                    for release_value in release_values:
                        image_link = f"https://jenkins.linuxcontainers.org/job/{image_name}/lastSuccessfulBuild/architecture=arm64,release={release_value},variant=default/artifact/rootfs.tar.xz"
                        # Test if the image link is reachable
                        try:
                            response = requests.head(image_link)
                            if response.status_code == 200:
                                print(f"Image Link is reachable: {image_link}")
                                image_links.append(image_link)
                            else:
                                print(f"Image Link is not reachable: {image_link}")
                        except requests.exceptions.RequestException as e:
                            print(f"Error checking image link: {str(e)}")
    # Print the list of generated links
    for image_link in image_links:
        print("Image Link:", image_link)
else:
    print("Failed to retrieve or parse the RSS feed.")
```
### Cockpit Personalizations
#### Change port
Cockpit systemd Socket
On servers with `systemd`, Cockpit starts on demand via socket activation. To change its port and/or address, place the following content in the `/etc/systemd/system/cockpit.socket.d/listen.conf` file, creating the file and any missing directories in that path. The `ListenStream` option specifies the desired address and TCP port.
```bash
[Socket]
ListenStream=
ListenStream=443
```
or
```bash
[Socket]
ListenStream=
ListenStream=7777
ListenStream=192.168.1.1:443
FreeBind=yes
```
:warning: The first line with an empty value is intentional. `systemd` allows multiple `Listen` directives to be declared in a single socket unit; an empty value in a drop-in file resets the list and thus disables the default port 9090 from the original unit.
The `FreeBind` option is highly recommended when defining specific IP addresses. See the systemd.socket manpage for details.
In order for the changes to take effect, run the following commands:
```bash
sudo systemctl daemon-reload
sudo systemctl restart cockpit.socket
```
#### Change background
scp file to `/usr/share/cockpit/branding/default/bg-plain.jpg`
#### New login box style
`nano /usr/share/cockpit/branding/debian/branding.css`
```css
body.login-pf {
    background: url("bg-plain.jpg") no-repeat 50% 0;
    background-size: cover;
    background-color: #101010;
}
#badge {
    width: 80px;
    height: 80px;
    background-image: url("logo.png");
    background-size: contain;
    background-repeat: no-repeat;
}
#brand {
    font-size: 18pt;
    text-transform: uppercase;
}
#brand:before {
    content: "${NAME}";
}
#index-brand:before {
    content: "${NAME}";
}
/* General styles for the login area */
#login {
    background-color: #0a0a23; /* Dark blue background */
    color: #c0c0ff; /* Light purple text */
    border: 1px solid #5d5db1; /* Blue border */
    padding: 20px;
    border-radius: 10px;
}
/* Styles for input fields */
#login input[type="text"], #login input[type="password"] {
    background-color: #161646; /* Darker blue for input fields */
    border: 1px solid #8a8acb; /* Blue border for inputs */
    color: #d0d0ff; /* Light purple text for inputs */
}
/* Style for buttons */
#login button {
    background-color: #4a4a9b; /* Dark purple background for buttons */
    color: #ffffff; /* White text for buttons */
    border: none;
}
/* Style for the button on hover */
#login button:hover {
    background-color: #6262b0;
}
/* Style for labels */
#login label, #brand, #login-details {
    color: #b8b8ff; /* Lighter purple for labels */
}
/* Style for links */
#login a {
    color: #b8b8ff; /* Lighter purple for links */
}
/* Style for links on hover */
#login a:hover {
    color: #ffffff; /* White for links on hover */
}
/* Style for alert messages */
.pf-v5-c-alert {
    background-color: #1a1a3d; /* Dark blue for alert boxes */
    color: #e0e0ff; /* Light purple for alert text */
    border-left: 5px solid #8a8acb; /* Blue border on the left */
}
#main, #login-details {
    /* Semi-transparent background */
    background-color: rgba(128, 0, 128, 0.1); /* Purple with 10% opacity */
    /* Frosted glass effect */
    backdrop-filter: blur(10px);
    /* This will contain the blur effect within the element's boundaries */
    overflow: hidden;
    /* Optionally, add a border to the container */
    border: 1px solid rgba(255, 255, 255, 0.25); /* White border with 25% opacity */
    /* Rounded corners */
    border-radius: 15px; /* Adjust the pixel value to your liking */
}
```
# Orange
Interface configuration: `inet 10.1.0.1 netmask 255.255.255.0 broadcast 0.0.0.0`
## Canary on Orange
Sources:
- https://jasonmurray.org/posts/2022/install-tcanary-ubuntu/
- https://cybergladius.com/build-honeypot-traps-to-secure-your-network/
Installation:
```
apt update && apt -y dist-upgrade && apt -y autoremove
apt -y install python3-dev python3-pip python3-virtualenv python3-venv python3-scapy libssl-dev libpcap-dev samba rsyslog jq
```
`useradd --shell /bin/bash -m -p "$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32 ; echo '')" canary`
`echo 'canary ALL = NOPASSWD: /home/canary/env/bin/opencanaryd --start, /home/canary/env/bin/opencanaryd --restart, /home/canary/env/bin/opencanaryd --dev' > /etc/sudoers.d/canary`
```bash
# Become the canary user.
sudo -u canary -i
cd ~
# Create the new virtual environment.
virtualenv env/
# Drop into the new environment.
. env/bin/activate
# Install the python3 packages that are needed.
python3 -m pip install opencanary scapy pcapy-ng
# Copy the default config for opencanary.
cp /home/canary/env/lib/python3.10/site-packages/opencanary/data/settings.json ~/opencanary.conf
```
- `opencanaryd --copyconfig`
```json
{
  "device.node_id": "node34",
  "ip.ignorelist": [ ],
  "logtype.ignorelist": [ ],
  "git.enabled": false,
  "git.port": 9418,
  "ftp.enabled": true,
  "ftp.port": 21,
  "ftp.banner": "FTP server ready",
  "http.banner": "Apache/2.2.22 (nix)",
  "http.enabled": false,
  "http.port": 80,
  "http.skin": "nasLogin",
  "https.enabled": false,
  "https.port": 443,
  "https.skin": "nasLogin",
  "https.certificate": "/etc/ssl/opencanary/opencanary.pem",
  "https.key": "/etc/ssl/opencanary/opencanary.key",
  "httpproxy.enabled": false,
  "httpproxy.port": 8080,
  "httpproxy.skin": "squid",
  "logger": {
    "class": "PyLogger",
    "kwargs": {
      "formatters": {
        "plain": {
          "format": "%(message)s"
        },
        "syslog_rfc": {
          "format": "opencanaryd[%(process)-5s:%(thread)d]: %(name)s %(levelname)-5s %(message)s"
        }
      },
      "handlers": {
        "console": {
          "class": "logging.StreamHandler",
          "stream": "ext://sys.stdout"
        },
        "file": {
          "class": "logging.FileHandler",
          "filename": "/var/tmp/opencanary.log"
        },
        "Webhook": {
          "class": "opencanary.logger.WebhookHandler",
          "url": "https://discord.com/api/webhooks/",
          "method": "POST",
          "data": {
            "username": "Orange Canary",
            "content": null,
            "embeds": [
              {
                "title": "Log",
                "description": "```json\n%(message)s```",
                "color": 8840968,
                "footer": {
                  "icon_url": "https://canary.tools/static/img/canary_logo_alert.gif",
                  "text": "Orange Canary Logging system"
                },
                "author": {
                  "name": "Orange Canary",
                  "icon_url": "https://canary.tools/static/img/canary_logo_alert.gif"
                }
              }
            ]
          },
          "status_code": 204,
          "headers": {
            "Content-Type": "application/json"
          }
        }
      }
    }
  },
  "portscan.enabled": true,
  "portscan.ignore_localhost": false,
  "portscan.logfile": "/var/log/kern.log",
  "portscan.synrate": 5,
  "portscan.nmaposrate": 5,
  "portscan.lorate": 3,
  "portscan.ignore_ports": [ ],
  "smb.auditfile": "/var/log/samba-audit.log",
  "smb.enabled": true,
  "mysql.enabled": true,
  "mysql.port": 3306,
  "mysql.banner": "5.5.43-0nix0.14.04.1",
  "ssh.enabled": true,
  "ssh.port": 22,
  "ssh.version": "SSH-2.0-OpenSSH_5.1p1 Nix-4",
  "redis.enabled": false,
  "redis.port": 6379,
  "rdp.enabled": false,
  "rdp.port": 3389,
  "sip.enabled": false,
  "sip.port": 5060,
  "snmp.enabled": true,
  "snmp.port": 161,
  "ntp.enabled": false,
  "ntp.port": 123,
  "tftp.enabled": false,
  "tftp.port": 69,
  "tcpbanner.maxnum": 10,
  "tcpbanner.enabled": false,
  "tcpbanner_1.enabled": false,
  "tcpbanner_1.port": 8001,
  "tcpbanner_1.datareceivedbanner": "",
  "tcpbanner_1.initbanner": "",
  "tcpbanner_1.alertstring.enabled": false,
  "tcpbanner_1.alertstring": "",
  "tcpbanner_1.keep_alive.enabled": false,
  "tcpbanner_1.keep_alive_secret": "",
  "tcpbanner_1.keep_alive_probes": 11,
  "tcpbanner_1.keep_alive_interval": 300,
  "tcpbanner_1.keep_alive_idle": 300,
  "telnet.enabled": false,
  "telnet.port": 23,
  "telnet.banner": "",
  "telnet.honeycreds": [
    {
      "username": "admin",
      "password": "$pbkdf2-sha512$19000$bG1NaY3xvjdGyBlj7N37Xw$dGrmBqqWa1okTCpN3QEmeo9j5DuV2u1EuVFD8Di0GxNiM64To5O/Y66f7UASvnQr8.LCzqTm6awC8Kj/aGKvwA"
    },
    {
      "username": "admin",
      "password": "admin12345"
    }
  ],
  "mssql.enabled": false,
  "mssql.version": "2012",
  "mssql.port": 1433,
  "vnc.enabled": false,
  "vnc.port": 5000
}
```
`vi /etc/samba/smb.conf`
```
[global]
workgroup = CorpNet.loc
server string = CorpNet
netbios name = CorpNetFile8
dns proxy = no
log file = /var/log/samba/log.all
log level = 0
max log size = 100
panic action = /usr/share/samba/panic-action %d
#samba 4
server role = standalone server
#samba 3
#security = user
passdb backend = tdbsam
obey pam restrictions = yes
unix password sync = no
map to guest = bad user
usershare allow guests = yes
load printers = no
vfs objects = full_audit
full_audit:prefix = %U|%I|%i|%m|%S|%L|%R|%a|%T|%D
full_audit:success = pread_recv pread_send
full_audit:failure = none
full_audit:facility = local7
full_audit:priority = notice
[CorpNetFiles]
comment = CorpNetFiles
path = /home/canary/smb_share
guest ok = yes
read only = yes
browseable = yes
```
```
mkdir /home/canary/smb_share
chmod 444 /home/canary/smb_share
chown canary:canary /home/canary/smb_share
echo '$FileCreateMode 0644' >> /etc/rsyslog.conf
echo 'local7.* /var/log/samba-audit.log' >> /etc/rsyslog.conf
touch /var/log/samba-audit.log
```
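Restart the services so the audit pipeline is active, then start OpenCanary through the sudoers rule created above (a sketch; the service names assume a systemd-based Ubuntu/Debian install):
```bash
# Pick up the new rsyslog rules and Samba config
systemctl restart rsyslog smbd
# Start the honeypot as the canary user, via the NOPASSWD sudoers entry
su - canary -c 'sudo /home/canary/env/bin/opencanaryd --start'
```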
## NAS QNAP SS-439-Pro (EOL) i686
### Hardware
- Atom 1.6 GHz CPU
- 2 GB RAM
- 4x 1TB WD Red NAS SSD 2.5 inch SATA
- 1x 120GB Kingston A400 SSD 2.5 inch SATA (eSATA + USB) for root
### FileSystem
| Disk | Quantity | Filesystem | Storage | Share | Usage |
| ---- | -------- | ---------- | ------- | ----- | ----- |
| 120GB SSD | 1 | ext4 | 120GB | N/A | Debian 12 root |
| 1TB SSD | 2 | ZFS Mirror | 1TB | NFSv4.2 | Data, Photos, Personal backups|
| 1TB SSD | 1 | ZFS Disk | 1TB | NFSv4.2 | CT/VM drives, data volumes, ISO, templates, K3S volume data|
| 1TB SSD | 1 | XFS | 1TB | N/A | SPARE |
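A sketch of the matching pool layout; the pool names and `/dev/disk/by-id` device names are placeholders (check yours with `ls -l /dev/disk/by-id`):
```bash
# Mirror of two 1TB SSDs for data, photos and personal backups
zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK_1 /dev/disk/by-id/ata-DISK_2
# Single-disk pool for CT/VM drives, data volumes, ISOs, templates, K3S volume data
zpool create -o ashift=12 vmstore /dev/disk/by-id/ata-DISK_3
```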
### Management GUI (Cockpit)
- Install Debian 12 i386 (bookworm)
- Some prerequisites `apt install -y tuned wget curl lm-sensors`
- Install [Cockpit](https://cockpit-project.org)
```bash
. /etc/os-release
echo "deb http://deb.debian.org/debian ${VERSION_CODENAME}-backports main" > \
/etc/apt/sources.list.d/backports.list
apt update
apt install -t ${VERSION_CODENAME}-backports cockpit
```
- Get basic add-ons AIO (or one-by-one below)
```bash!
sudo apt install -y cockpit-storaged cockpit-networkmanager cockpit-packagekit cockpit-sosreport
```
- Install Storage management
`sudo apt install -y cockpit-storaged`
- Install Network/Firewall management
`sudo apt install -y cockpit-networkmanager`
- Install SoftwareUpdates check
`sudo apt install -y cockpit-packagekit`
- Install Diagnostic report
`sudo apt install -y cockpit-sosreport`
- Install [45Drives file sharing](https://github.com/45Drives/cockpit-file-sharing) to set up NFS and Samba shares
```bash
curl -LO https://github.com/45Drives/cockpit-file-sharing/releases/download/v3.3.4/cockpit-file-sharing_3.3.4-1focal_all.deb
sudo apt install -y ./cockpit-file-sharing_3.3.4-1focal_all.deb
```
- (Optional) Install [45Drives Navigator](https://github.com/45Drives/cockpit-navigator) file browser
```bash
wget https://github.com/45Drives/cockpit-navigator/releases/download/v0.5.10/cockpit-navigator_0.5.10-1focal_all.deb
sudo apt install -y ./cockpit-navigator_0.5.10-1focal_all.deb
```
- (Optional) Install sensors
```bash
wget https://github.com/ocristopfer/cockpit-sensors/releases/latest/download/cockpit-sensors.tar.xz && \
tar -xf cockpit-sensors.tar.xz cockpit-sensors/dist && \
sudo mv cockpit-sensors/dist /usr/share/cockpit/sensors && \
rm -r cockpit-sensors && \
rm cockpit-sensors.tar.xz
```
- (Optional) Install Storage benchmarks
```bash
wget https://github.com/45Drives/cockpit-benchmark/releases/download/v2.1.0/cockpit-benchmark_2.1.0-2focal_all.deb
sudo apt install -y ./cockpit-benchmark_2.1.0-2focal_all.deb
```
### ZFS support on i386
#### Install prereqs
`sudo apt install -y linux-headers-$(uname -r) zfsutils-linux zfs-dkms zfs-zed autoconf carton build-essential checkinstall`
#### Install & create package
##### Install znapzend as debian package
This is a requirement for the ZFS manager by 45Drives and does not exist as an i386 package, so we build it ourselves.
Check out latest version https://github.com/oetiker/znapzend/releases/
```
ZNAPVER=0.23.1
wget https://github.com/oetiker/znapzend/releases/download/v${ZNAPVER}/znapzend-${ZNAPVER}.tar.gz
tar zxvf znapzend-${ZNAPVER}.tar.gz
cd znapzend-${ZNAPVER}
./configure --prefix=/opt/znapzend-${ZNAPVER}
sudo make
sudo mkdir -p "/opt/znapzend-${ZNAPVER}/lib"
sudo mkdir -p "/opt/znapzend-${ZNAPVER}/share/man/man1"
sudo checkinstall
sudo dpkg -i znapzend_${ZNAPVER}-1_i386.deb
```
Add `export PATH=$PATH:/opt/znapzend-0.23.1/bin` to `/root/.bashrc`.
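For example:
```bash
echo 'export PATH=$PATH:/opt/znapzend-0.23.1/bin' >> /root/.bashrc
```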
##### Install `cockpit-zfs-manager`
```
wget https://github.com/45Drives/cockpit-zfs-manager/releases/download/v1.3.1/cockpit-zfs-manager_1.3.1-1focal_all.deb
sudo dpkg -i ./cockpit-zfs-manager_1.3.1-1focal_all.deb
```
Fix [Pool replication marked with "Error" even though no error is reported](https://github.com/45Drives/cockpit-zfs-manager/issues/17):
`sudo cpan Mojo::Base Mojo::IOLoop::ForkCall`
#### ZFS Performance
`sudo vim /etc/modprobe.d/zfs.conf`
Based on the ARC size table below (see the ARC SIZE section at the end of this document):
- sys: i386, RAM: 1024MB → vfs.zfs.arc_min "128M", vfs.zfs.arc_max "128M"
- sys: i386, RAM: 2048MB → vfs.zfs.arc_min "400M", vfs.zfs.arc_max "400M"
In my case for 2048MB RAM
```bash
# 400M in bytes (modprobe.d does not support trailing comments on option lines)
options zfs zfs_arc_min=419430400
options zfs zfs_arc_max=419430400
```
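The byte values are just the megabyte figures multiplied out:
```bash
echo $((400 * 1024 * 1024)) # 419430400
```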
`sudo update-initramfs -u -k all`
```
sudo reboot
## OR ##
sudo systemctl reboot
```
Check if configuration has been set
`cat /sys/module/zfs/parameters/zfs_arc_min`
and
`cat /sys/module/zfs/parameters/zfs_arc_max`
### NFS share permissions
vmctbackups: `sec=sys,rw,crossmnt,no_subtree_check,async,all_squash,anonuid=0,anongid=0`
vmcstore: `sec=sys,rw,crossmnt,no_subtree_check,async,all_squash,anonuid=0,anongid=0`
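A minimal `/etc/exports` sketch using those option strings; the dataset mount points and client subnet are assumptions, substitute your own:
```bash
# /etc/exports -- hypothetical paths and client subnet
/tank/vmctbackups 10.0.0.0/24(sec=sys,rw,crossmnt,no_subtree_check,async,all_squash,anonuid=0,anongid=0)
/tank/vmcstore    10.0.0.0/24(sec=sys,rw,crossmnt,no_subtree_check,async,all_squash,anonuid=0,anongid=0)
```
Apply with `sudo exportfs -ra`.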
### Build udisk2 with iscsi driver in Debian
```
sudo apt-get update
sudo apt-get install build-essential autoconf libtool libglib2.0-dev libgudev-1.0-dev libgudev-1.0-cil-dev libgudev-1.0-doc libgudev-1.0-dbg libudisks2-dev libudisks2-0
```
`git clone https://github.com/storaged-project/udisks.git`
```
cd udisks
./autogen.sh
```
`./configure --with-iscsi`
`make`
`sudo make install`
`sudo systemctl restart udisks2`
Package the build as a .deb:
`mkdir -p debian/{DEBIAN,usr/bin}`
`vi debian/DEBIAN/control` with package metadata. For example:
```plaintext
Package: your-package-name
Version: 1.0
Architecture: amd64
Maintainer: Your Name <your.email@example.com>
Description: Short description of your package
Longer description of your package.
```
Copy the compiled binaries and necessary files into the package directory:
```bash
cp compiled-binary debian/usr/bin/
```
Ensure the files have the correct permissions and ownership:
```bash
chmod 755 debian/usr/bin/compiled-binary
```
Build the Debian package with `dpkg-deb`; this creates a `.deb` package in the parent directory:
```bash
dpkg-deb --build debian
```
Install the newly created package and test that it works as expected:
```bash
sudo dpkg -i your-package-name.deb
```
Finally, clean up any temporary files or directories created during packaging.
# WIP (Work in progress section)
## NixOS LXC container
Sources:
- https://mtlynch.io/notes/nixos-proxmox/
Setup:
### Get template
Get the latest NixOS container template and download it to the Proxmox CT templates storage. Replace `24.11` with the latest version:
```
https://hydra.nixos.org/job/nixos/release-24.11/nixos.lxdContainerImage.x86_64-linux
```
### Set up container
Set up container from the `proxmox` server console.
#### Set up the configuration
Make sure to set the correct storage location and filename, and configure other parameters as desired.
```
# Where the template file is located
TEMPLATE_STORAGE='local'
# Name of the template file downloaded from Hydra.
TEMPLATE_FILE='nixos-24.11-amd64.tar.xz'
# Name to assign to new NixOS container.
CONTAINER_HOSTNAME='hs-lan-proxy'
# Which storage location to place the new NixOS container.
CONTAINER_STORAGE='section9-vmctstore'
# How much RAM to assign the new container.
CONTAINER_RAM_IN_MB='512'
# How much disk space to assign the new container.
CONTAINER_DISK_SIZE_IN_GB='8'
```
### Create the container
```
pct create "$(pvesh get /cluster/nextid)" \
--arch amd64 \
"${TEMPLATE_STORAGE}:vztmpl/${TEMPLATE_FILE}" \
--ostype unmanaged \
--description nixos \
--hostname "${CONTAINER_HOSTNAME}" \
--net0 name=eth0,bridge=vmbr0,ip=dhcp,firewall=0 \
--storage "${CONTAINER_STORAGE}" \
--memory "${CONTAINER_RAM_IN_MB}" \
--rootfs ${CONTAINER_STORAGE}:${CONTAINER_DISK_SIZE_IN_GB} \
--unprivileged 1 \
--features nesting=1 \
--cmode console \
--onboot 1 \
--start 1
```
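Once the container is up, you can attach to it from the Proxmox host shell:
```bash
pct enter <CTID>   # or: pct console <CTID>
```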
### Get ssh access with keys from Codeberg
```
CODEBERG_USERNAME='antonionardella' # Replace this.
mkdir -p ~/.ssh && \
curl "https://codeberg.org/${CODEBERG_USERNAME}.keys" > ~/.ssh/authorized_keys
```
### Apply working configuration
Write the following to `/etc/nixos/configuration.nix` inside the container:
```nix
{
  modulesPath,
  config,
  pkgs,
  ...
}: let
  hostname = "nixos";
  user = "nixos";
  timeZone = "Europe/Rome";
  defaultLocale = "en_US.UTF-8";
in {
  imports = [
    # Include the default lxc/lxd configuration.
    "${modulesPath}/virtualisation/lxc-container.nix"
  ];
  boot.isContainer = true;
  networking.hostName = hostname;
  environment.systemPackages = with pkgs; [
    vim
  ];
  services.openssh.enable = true;
  time.timeZone = timeZone;
  i18n = {
    defaultLocale = defaultLocale;
    extraLocaleSettings = {
      LC_ADDRESS = defaultLocale;
      LC_IDENTIFICATION = defaultLocale;
      LC_MEASUREMENT = defaultLocale;
      LC_MONETARY = defaultLocale;
      LC_NAME = defaultLocale;
      LC_NUMERIC = defaultLocale;
      LC_PAPER = defaultLocale;
      LC_TELEPHONE = defaultLocale;
      LC_TIME = defaultLocale;
    };
  };
  users = {
    mutableUsers = false;
    users."${user}" = {
      isNormalUser = true;
      hashedPassword = "$6$kFDsWaM/aJIAEOFO$dg82DfA31sVjmwRrUj7.gIQb71p/P5WJu/fujBrXp1QHOzOkqEdylOio0..nuckxoqCO4blZ7TRIttUsRmrUF0"; # Set the password manually with `passwd`, via Nix-Secrets, or generate with `mkpasswd -m sha-512`
      extraGroups = ["wheel"];
    };
  };
  # Enable passwordless sudo.
  security.sudo.wheelNeedsPassword = false;
  # Suppress systemd units that don't work in LXC.
  systemd.suppressedSystemUnits = [
    "dev-mqueue.mount"
    "sys-kernel-debug.mount"
    "sys-fs-fuse-connections.mount"
  ];
  nix.settings.experimental-features = ["nix-command" "flakes"];
  system.stateVersion = "24.11";
}
```
### Apply configuration and reboot the container
```
nix-channel --update && \
nixos-rebuild switch --upgrade && \
echo "install complete, rebooting..." && \
poweroff --reboot
```
## Authentik (LXC - Debian - PostgreSQL)
Sources:
- https://goauthentik.io/docs/installation/
- https://codeberg.org/antonionardella/authentik-bare-metal
Install:
- `git clone https://codeberg.org/antonionardella/authentik-bare-metal.git`
- `cd authentik-bare-metal`
- `sh prepare.sh`
- `sh install.sh`
## Zammad Debian 12 Elasticsearch OSS Postgresql16
- `apt update && apt full-upgrade -y && reboot`
- `apt install -y curl apt-transport-https gnupg wget sudo`
- Add Zammad repo
```
curl -fsSL https://dl.packager.io/srv/zammad/zammad/key | \
gpg --dearmor | tee /etc/apt/trusted.gpg.d/pkgr-zammad.gpg > /dev/null
```
```
echo "deb [signed-by=/etc/apt/trusted.gpg.d/pkgr-zammad.gpg] https://dl.packager.io/srv/deb/zammad/zammad/stable/debian 12 main"| \
tee /etc/apt/sources.list.d/zammad.list > /dev/null
```
- Install Elasticsearch OSS 7.10.2
```
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.10.2-amd64.deb
dpkg -i elasticsearch-oss-7.10.2-amd64.deb
```
- Fix locale
```
export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
locale-gen en_US.UTF-8
dpkg-reconfigure locales
# In the window select en_US.UTF-8 UTF-8 and define it as default
```
- Install postgresql 16 and fix template
```
apt update
apt install -y postgresql-16
sudo -u postgres psql
UPDATE pg_database SET datistemplate = FALSE WHERE datname = 'template1';
DROP DATABASE template1;
CREATE DATABASE template1 WITH TEMPLATE = template0 ENCODING = 'UTF8'
lc_collate='en_US.utf8' lc_ctype='en_US.utf8';
UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template1';
\c template1
VACUUM FREEZE;
exit
```
- Set timezone
`dpkg-reconfigure tzdata`
- Install Zammad
```
apt install -y zammad
# Set the Elasticsearch server address
zammad run rails r "Setting.set('es_url', 'http://localhost:9200')"
# Build the search index
zammad run rake zammad:searchindex:rebuild
```
- Set up NGINX as the reverse proxy (see the sketch below)
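A minimal sketch of that step, assuming the Zammad package dropped its nginx vhost under `/etc/nginx/sites-enabled/` (the path may vary by version; adjust `server_name` to your host):
```bash
apt install -y nginx
# Edit the shipped vhost, then reload
nano /etc/nginx/sites-enabled/zammad.conf
systemctl reload nginx
```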
### Migrate OTRS 6 to Zammad
Sources:
- https://www.znuny.com/en/add-ons/znuny4otrs-repository
#### Download opm plugins to your PC
`https://addons.znuny.com/public/Znuny4OTRS-Repo-6.0.77.opm`
`https://ftp.zammad.com/otrs-migrator-plugins/Znuny4OTRS-ZammadMigrator-6.0.7.opm`
#### Install
Go to `Admin` - `Package Manager`
- Upload and Install `Znuny4OTRS-Repo-6.0.77.opm`
- Upload and Install `Znuny4OTRS-ZammadMigrator-6.0.7.opm`
### Start Zammad and migrate
# REMOVED
## K3S (LXC - Debian)
Sources:
- https://betterprogramming.pub/rancher-k3s-kubernetes-on-proxmox-containers-2228100e2d13
- https://kevingoos.medium.com/installing-k3s-in-an-lxc-container-2fc24b655b93
- https://dev.to/gvelrajan/configure-local-kubectl-to-remote-access-kubernetes-cluster-2g81
Install K3S on Proxmox RPI4 hosts
- Set up a privileged CT (uncheck `Unprivileged container`)
- Disk `4GB`
- CPU `3`
- RAM `3072`
- SWAP `0`
- Stop container
- Network (Static IP from the DHCP server)
- **ON THE HOST SHELL**
- `nano /etc/pve/lxc/CTID.conf`
```
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"
```
- Start container
- `pct push <CTID> /boot/config.txt /boot/config-$(uname -r)` # use `/boot/config-$(uname -r)` as the source instead of `/boot/config.txt` on hosts running distros other than DietPi
- **IN THE CONTAINER SHELL**
- *(Optional)* Install `nano` and `curl`
- `nano /usr/local/bin/conf-kmsg.sh`
```
#!/bin/sh -e
if [ ! -e /dev/kmsg ]; then
    ln -s /dev/console /dev/kmsg
fi
mount --make-rshared /
```
- `nano /etc/systemd/system/conf-kmsg.service`
```
[Unit]
Description=Make sure /dev/kmsg exists
[Service]
Type=simple
RemainAfterExit=yes
ExecStart=/usr/local/bin/conf-kmsg.sh
TimeoutStartSec=0
[Install]
WantedBy=default.target
```
- Enable the service
```
chmod +x /usr/local/bin/conf-kmsg.sh
systemctl daemon-reload
systemctl enable --now conf-kmsg
```
:warning: Repeat the above for all nodes
- Set up the control node
`curl -sfL https://get.k3s.io | sh -s - server --disable servicelb --disable traefik --write-kubeconfig-mode 644`
- Get node-token for worker nodes
`cat /var/lib/rancher/k3s/server/node-token`
- Add worker node to cluster
`curl -fsL https://get.k3s.io | K3S_URL=https://<control node ip>:6443 K3S_TOKEN=<cluster token> sh -s`
K3S is running in Debian LXC on Proxmox 8
### Portainer in K3S
Sources:
- https://theselfhostingblog.com/posts/setting-up-a-kubernetes-cluster-using-raspberry-pis-k3s-and-portainer/
Installation
- `kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml`
### NFS Storage on K3S
Sources:
- https://computingforgeeks.com/configure-nfs-as-kubernetes-persistent-volume-storage/
Prerequisites:
- K3S nodes, install `nfs-common open-iscsi jq`
- NFS share named `k3s` on `10.0.0.99`
Deploy NFS Subdir External Provisioner in Kubernetes cluster
Make sure your NFS server is configured and accessible from your Kubernetes cluster. The minimum information required to connect to the NFS server is hostname/IP address and exported share path.
Install Helm on your system.
`curl -L https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash`
Confirm helm is installed and working.
```bash
$ helm version
version.BuildInfo{Version:"v3.10.2", GitCommit:"50f003e5ee8704ec937a756c646870227d7c8b58", GitTreeState:"clean", GoVersion:"go1.18.8"}
```
Also check if your kubectl command can talk to kubernetes cluster.
```bash
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready control-plane 16d v1.24.6
node01 Ready <none> 16d v1.24.6
node02 Ready <none> 15d v1.24.6
node03 Ready <none> 15d v1.24.6
```
The nfs-subdir-external-provisioner chart installs a custom storage class into a Kubernetes cluster using the Helm package manager. It also installs an NFS client provisioner into the cluster, which dynamically creates persistent volumes from a single NFS share.
Let's add the helm chart repo:
`helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/`
Create a namespace called nfs-provisioner:
```bash!
$ kubectl create ns nfs-provisioner
namespace/nfs-provisioner created
```
Set NFS Server
```bash
NFS_SERVER=10.0.0.99
NFS_EXPORT_PATH=/k3s
```
Deploy NFS provisioner resources in your cluster using helm
```bash!
helm -n nfs-provisioner install nfs-provisioner-01 nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=$NFS_SERVER \
--set nfs.path=$NFS_EXPORT_PATH \
--set storageClass.defaultClass=true \
--set replicaCount=1 \
--set storageClass.name=nfs-01 \
--set storageClass.provisionerName=nfs-provisioner-01
```
Check the Helm chart's configuration parameters for the NFS provisioner to see what other values you can set while installing.
Command execution output:
```bash!
NAME: nfs-provisioner-01
LAST DEPLOYED: Thu Dec 1 00:17:47 2022
NAMESPACE: nfs-provisioner
STATUS: deployed
REVISION: 1
TEST SUITE: None
```
The NFS client provisioner is now deployed in the `nfs-provisioner` namespace.
```bash
$ kubectl get pods -n nfs-provisioner
NAME READY STATUS RESTARTS AGE
nfs-provisioner-01-nfs-subdir-external-provisioner-58bcd67f5bx9mvr 1/1 Running 0 3m34s
```
The name of the storageClass created is `nfs-01`
```bash
$ kubectl get sc -n nfs-provisioner
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-01 nfs-provisioner-01 Delete Immediate true 9m26s
```
Now it is possible to claim an NFS-backed volume from deployments with:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <yourdeployment>-nfs-pvc
spec:
  storageClassName: nfs-01
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi # indicate how much space you want
---
[...]
spec:
  containers:
    - name: name
      image: image
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /data
          name: <name>-data
  volumes:
    - name: <name>-data
      persistentVolumeClaim:
        claimName: <yourdeployment>-nfs-pvc
```
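After applying a manifest like this, check that the claim binds; the provisioner creates one subdirectory per PVC on the NFS export:
```bash
kubectl get pvc
# STATUS should read "Bound"; the backing folder appears under the export (here /k3s) on the NFS server
```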
(Optional) Installing multiple provisioners
It is possible to install multiple NFS provisioners in your cluster to have access to multiple NFS servers and/or multiple exports from a single NFS server. Each provisioner must have a different `storageClass.provisionerName` and a different `storageClass.name`. For example:
```bash
helm install -n nfs-provisioner second-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=x.x.x.x \
--set nfs.path=/second/exported/path \
--set storageClass.name=second-nfs-client \
--set storageClass.provisionerName=k8s-sigs.io/second-nfs-subdi
```
# Backup 4G connection
## RPi
Set up RPi with dietpi
## 4G modem connection
- Install usb_modeswitch
- `sudo apt install usb_modeswitch`
- Go to USB modem homepage and configure APN
- http://192.168.32.1
- Iliad APN: Iliad
- Save SIM PIN
- `sudo nano /etc/network/interfaces.d/usb0`
```
allow-hotplug usb0
iface usb0 inet dhcp
```
- reboot
Telnet:
- Username: root
- Password: zte9x15
## Garage bot service
`sudo systemctl --force --full edit garagebot.service`
```
[Unit]
Description=Telegram Garagebot script
After=multi-user.target
[Service]
Type=simple
User=ziofester
ExecStart=/home/ziofester/garage-remote-telegram-bot/venv/bin/python /home/ziofester/garage-remote-telegram-bot/bot.py
WorkingDirectory=/home/ziofester/garage-remote-telegram-bot
Restart=on-abort
[Install]
WantedBy=multi-user.target
```
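Then reload systemd and enable the service at boot:
```bash
sudo systemctl daemon-reload
sudo systemctl enable --now garagebot.service
```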
## Meshtastic with SX1262
Sources:
- https://meshtastic.org/docs/hardware/devices/linux-native-hardware/
- https://www.youtube.com/watch?v=91ULi9DWgds
Hardware:
- RPi3
- [Waveshare SX126X (SPI version)](https://www.waveshare.com/sx1262-lorawan-hat.htm) !! ALWAYS CONNECT THE ANTENNA !!
Installation:
- Get packages
- `sudo apt install libgpiod-dev libyaml-cpp-dev libbluetooth-dev openssl libssl-dev libulfius-dev liborcania-dev libulfius2.7`
- Get latest package from [GitHub releases](https://github.com/meshtastic/firmware/releases)
- `wget https://github.com/meshtastic/firmware/releases/download/v2.4.2.5b45303/meshtasticd_2.4.2.5b45303_armhf.deb`
- [Configuration](https://meshtastic.org/docs/hardware/devices/linux-native-hardware/#configuration)
```
sudo raspi-config nonint set_config_var dtparam=spi on /boot/config.txt # Enable SPI
sudo raspi-config nonint set_config_var dtparam=i2c_arm on /boot/config.txt # Enable i2c_arm
# Ensure dtoverlay=spi0-0cs is set in /boot/config.txt without altering dtoverlay=vc4-kms-v3d or dtparam=uart0
sudo sed -i -E -e '/^\s*#?\s*dtoverlay\s*=\s*vc4-kms-v3d/! s/^\s*#?\s*(dtoverlay|dtparam\s*=\s*uart0)\s*=.*/dtoverlay=spi0-0cs/' /boot/config.txt
# Insert dtoverlay=spi0-0cs after dtparam=spi=on if not already present
if ! sudo grep -q '^\s*dtoverlay=spi0-0cs' /boot/config.txt; then
sudo sed -i '/^\s*dtparam=spi=on/a dtoverlay=spi0-0cs' /boot/config.txt
fi
```
- Enable serial port for GPS/GNSS chip on the SX1262 (mine does not have it, so not needed)
```
sudo raspi-config nonint do_serial_hw 0
sudo raspi-config nonint do_serial_cons 1
```
- Reboot
- Install package
`sudo apt install ./meshtasticd_{version}_armhf.deb`
- Change config
`sudo nano /etc/meshtasticd/config.yaml`
```yaml
Lora:
  Module: sx1262 # Waveshare SX126X XXXM
  DIO2_AS_RF_SWITCH: true
  CS: 21
  IRQ: 16
  Busy: 20
  Reset: 18
Logging:
  LogLevel: info # debug, info, warn, error
#  TraceFile: /var/log/meshtasticd.json
Webserver:
  Port: 443 # Port for Webserver & Webservices
  RootPath: /usr/share/doc/meshtasticd/web # Root Dir of WebServer
General:
  MaxNodes: 200
  MaxMessageQueue: 100
```
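Then enable and watch the daemon (the .deb ships a `meshtasticd` systemd unit):
```bash
sudo systemctl enable --now meshtasticd
sudo journalctl -u meshtasticd -f
```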
# Useful commands and configs
## Resize LXC disk
- Stop container
- `pct resize <CTID> rootfs <newsize, e.g. 1G>`
- Start container
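Verify the new size from inside the container:
```bash
df -h /
```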
## K3S Volume Mounts
```yaml
volumeMounts:
  - name: nfs-pvc
    mountPath: "/mnt"
volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: nfs-test-claim
```
K3S Vaultwarden deploy example with NFS and NodePort 30081:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vaultwarden-nfs-pvc
spec:
  storageClassName: nfs-01
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vaultwarden
spec:
  selector:
    matchLabels:
      app: vaultwarden
  template:
    metadata:
      labels:
        app: vaultwarden
    spec:
      containers:
        - name: vaultwarden
          image: vaultwarden/server:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /data
              name: vaultwarden-data
      volumes:
        - name: vaultwarden-data
          persistentVolumeClaim:
            claimName: vaultwarden-nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: vaultwarden-service
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30081
      protocol: TCP
  selector:
    app: vaultwarden
```
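Save the manifest (the filename is up to you), apply it, and Vaultwarden is reachable on any node at port 30081:
```bash
kubectl apply -f vaultwarden.yaml
curl -I http://<node-ip>:30081
```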
## Remove node from proxmox cluster
`pvecm delnode <nodename>`
## ARC SIZE
I pulled these values out of the zfstuner extension for FreeNAS 7 by daoyama. For those unfamiliar with this extension, it would write values to `loader.conf` for you based on the amount of RAM you have installed or the amount of RAM you wanted to dedicate just to ZFS. These values were created with FreeBSD 7 in mind, but I believe them to be valid for FreeBSD 8.
| sys | RAM | vm.kmem_size | vfs.zfs.arc_min | vfs.zfs.arc_max |
| ---- | ---- | ---- | ---- | ---- |
| i386 | 512MB | 340M | 60M | 60M |
| i386 | 1024MB | 512M | 128M | 128M |
| i386 | 1536MB | 1024M | 256M | 256M |
| i386 | 2048MB | 1400M | 400M | 400M |
| 64-bit | 2GB | 1536M | 512M | 512M |
| 64-bit | 3GB | 2048M | 1024M | 1024M |
| 64-bit | 4GB | 2560M | 1536M | 1536M |
| 64-bit | 6GB | 4608M | 3072M | 3072M |
| 64-bit | 8GB | 6656M | 5120M | 5120M |
| 64-bit | 12GB | 10752M | 9216M | 9216M |
| 64-bit | 16GB | 14336M | 12288M | 12288M |
| 64-bit | 24GB | 22528M | 20480M | 20480M |
| 64-bit | 32GB | 30720M | 28672M | 28672M |
| 64-bit | 48GB | 47104M | 45056M | 45056M |
## Unprivileged NFS mount
Sources:
- https://forum.proxmox.com/threads/tutorial-mounting-nfs-share-to-an-unprivileged-lxc.138506/
Setup:
- Access your node's shell
  - `Proxmox > Your Node > Shell`
- Create a mounting point for the share
  - `mkdir /mnt/computer2/downloads`
- Edit fstab so that the share mounts automatically on reboot
  - Open: `nano /etc/fstab`
  - Add: `192.168.1.20:/mnt/user/downloads/ /mnt/computer2/downloads nfs defaults 0 0`
  - Save
- Mount the share
  - Reload systemd: `systemctl daemon-reload`
  - Mount shares: `mount -a`
- Add the mount point to your LXC
  - Open: `nano /etc/pve/lxc/101.conf`
  - Add: `mp0: /mnt/computer2/downloads/,mp=/downloads`
  - Save
- Start the LXC
- Update the LXC user's permissions
  - `groupadd -g 10000 lxc_shares`
    Note: you can use whatever group name you want, as long as you use it again in the next step.
  - `usermod -aG lxc_shares root`
    Note: your username is probably root, but substitute whichever user you want to configure permissions for.
- Reboot the LXC
- Verify permissions
  - Create a file in your mountpoint: `touch foobar`
  - Attempt to delete `foobar` from another machine.
  - If successful, you are done.