# HRO deployment
## Shrinking an active Linux partition on OVH
Don't delete the BIOS boot and ESP partitions; delete only the main old OS partition. Then follow the installation process from the next step (Install NixOS on OVH VPS).
## Install NixOS on OVH VPS
1. Backup your data.
2. Reboot into rescue mode with the control panel.
3. Connect to KVM with the control panel.
You can also SSH into the host with login credentials from KVM.
4. Download NixOS image.
You likely won't have enough disk space in rescue mode, so use a ramfs:
```shell
mkdir /mnt/ramdisk
mount -t ramfs ramfs /mnt/ramdisk
cd /mnt/ramdisk
wget https://channels.nixos.org/nixos-21.11/latest-nixos-minimal-x86_64-linux.iso
```
5. Load image with qemu.
[https://community.ovh.com/en/t/installing-operating-system-from-custom-image-on-ovh-vps-centos-8-tutorial/3561](https://community.ovh.com/en/t/installing-operating-system-from-custom-image-on-ovh-vps-centos-8-tutorial/3561)
```shell
apt install qemu-kvm
qemu-system-x86_64 -netdev type=user,id=mynet0 -device virtio-net-pci,netdev=mynet0 -m 2048 -enable-kvm -drive index=0,media=disk,if=virtio,file=/dev/sdb -cdrom latest-nixos-minimal-x86_64-linux.iso -boot d
```
6. Connect to the installer inside QEMU guest.
On your local machine, forward the VNC port locally.
```shell
ssh -o "UserKnownHostsFile=/dev/null" -NL 5900:localhost:5900 root@${YOUR_HOST}
```
Then connect to `vnc://localhost` with your VNC client. “Screen Sharing.app” on macOS somehow didn’t work for me.
7. Install as usual inside VNC.
Follow instructions in [https://nixos.org/manual/nixos/stable/index.html#sec-installation](https://nixos.org/manual/nixos/stable/index.html#sec-installation). Use `vda` whenever `sda` is referenced.
```shell
sudo -i
parted /dev/vda -- mklabel msdos
parted /dev/vda -- mkpart primary 1MiB -8GiB
parted /dev/vda -- mkpart primary linux-swap -8GiB 100%
mkfs.ext4 -L nixos /dev/vda1
mkswap -L swap /dev/vda2
mount /dev/disk/by-label/nixos /mnt
swapon /dev/vda2
nixos-generate-config --root /mnt
vi /mnt/etc/nixos/configuration.nix
# uncomment boot.loader.grub.device and set it to "/dev/vda"
nixos-install
```
8. Shutdown the qemu guest.
```shell
shutdown -h now
```
9. Reboot **with the control panel** into non-rescue mode.
10. Now that we are no longer in rescue mode, change `/dev/vda` back to `/dev/sda` (e.g. in `boot.loader.grub.device`) and rebuild so the change takes effect.
```shell
vi /etc/nixos/configuration.nix
nixos-rebuild switch
```
11. Done.
## Generating GridSync JSON blob
### zkap_unit_name
We currently use `"GB-month"`
### zkap_unit_multiplier
For the PS _staging_ grid, we have one storage server and issue batches of 50000 tokens per purchase. This works out to 50000 "MB-months", or 50 "GB-months" when multiplied by 0.001 (the default `zkap_unit_multiplier`). On the PS _production_ grid, we also issue batches of 50000 tokens per purchase, _but_ we have 5 servers and distribute shares across them such that 3 are needed to reconstruct the file. Accordingly, even though customers get 50000 actual tokens "under the hood", because it costs more tokens to distribute the redundant shares, they're only effectively getting 30 "GB-months" worth of tokens/storage-time (i.e., `50000 * (3 / 5)`, which gives us `30000`). The `zkap_unit_multiplier` of 0.0006, in this case, is used to convert a) the total number of actual tokens into b) the user-facing display of effective "GB-months" available in the UI: `50000 * 0.0006 = 30` GB-months. So with the PS production grid token count and erasure coding params, the `zkap_unit_multiplier` can be determined like so:
`(token_count * (shares_needed / shares_total)) / (token_count / 0.001)`
With your HRO cloud, likewise, you're issuing 50000 tokens per voucher, but you're distributing shares across 3 servers such that only 1 share is needed to reconstruct the file. This means that, even though users have 50000 tokens, because they're uploading each file 3 times, they're only getting ~16666.7 tokens' worth of storage-time (or ~16.67 "GB-months"). Running those values through the formula above, we would need to set a `zkap_unit_multiplier` of ~0.00033 to convert the 50000 tokens users will receive and display the effective value of 16.67 GB-months in the UI.
If that's too low, or if you need to give each user 50 GB-months of effective storage time, you'll need to issue larger batches of tokens (in this case, three times as many) to account for the extra uploads you're configuring their clients to perform.
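The arithmetic above can be sketched as a small helper (the function name is illustrative, not part of any PrivateStorage tooling):

```shell
# Compute zkap_unit_multiplier from the token count and erasure-coding
# parameters, per the formula above:
#   (token_count * (shares_needed / shares_total)) / (token_count / 0.001)
zkap_unit_multiplier() {
  awk -v tokens="$1" -v needed="$2" -v total="$3" \
    'BEGIN { printf "%.6f\n", (tokens * (needed / total)) / (tokens / 0.001) }'
}

zkap_unit_multiplier 50000 3 5   # PS production grid (3-of-5) -> 0.000600
zkap_unit_multiplier 50000 1 3   # HRO cloud (1-of-3)          -> 0.000333
```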
@Chris
### Shares
The `shares-total` parameter dictates exactly how many shares are created when a file is uploaded. `shares-needed` is the number of shares needed to recreate the entire file. The last parameter, `shares-happy`, configures how your shares will be distributed between servers: while `shares-needed` and `shares-total` deal with shares, `shares-happy` specifies actual servers.
Set the values depending on the total number of nodes and the expected reliability of the network. For example, setting `shares-needed` to `1` means we get slower upload speed but higher reliability, since only one storage server needs to be online to get the data back. For HRO cloud I set them as follows:
```
shares-needed 1
shares-happy 2
shares-total 3
```
Related Links:
- [Servers of Happiness](https://tahoe-lafs.readthedocs.io/en/latest/specifications/servers-of-happiness.html)
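For reference, on the Tahoe-LAFS client side these same parameters live in `tahoe.cfg` under the `[client]` section, in dotted form (fragment matching the values above):

```
[client]
shares.needed = 1
shares.happy = 2
shares.total = 3
```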
### Storage Header
Use the string from `/var/db/tahoe-lafs/storage/node.pubkey` starting from `v0-`
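A minimal sketch of extracting that string, assuming `node.pubkey` contains a line of the form `pub-v0-<base32>` (the sample value below is made up):

```shell
# Strip everything before "v0-". On a real storage server you would read
# /var/db/tahoe-lafs/storage/node.pubkey instead of the printf.
printf 'pub-v0-exampleexampleexample\n' | grep -o 'v0-.*'
```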
### Lease information
Probably set the same lease configuration values as PS uses. They make little practical difference right now, since GC isn't turned on server-side (so expired leases don't matter).
@Jean-Paul
```
lease.crawl-interval.mean
lease.crawl-interval.range
lease.min-time-remaining
```
### storage-server-FURL
Get the value from `/var/db/tahoe-lafs/storage/private/storage-plugin.privatestorageio-zkapauthz-v1.furl` on each storage server.
### allowed-public-keys
Use `PaymentServer-get-public-key` to generate it.
```shell
nix-shell ./PrivateStorageio/shell.nix
nix-shell ./PaymentServer/shell.nix
cd PaymentServer
nix-build nix/ -A PaymentServer.components.exes.PaymentServer-get-public-key
./result/bin/PaymentServer-get-public-key < PATH_TO_ristretto.signing-key
```
### Template
```JSON
{
  "version": "2",
  "nickname": "Name of Grid",
  "zkap_unit_name": "GB-month",
  "zkap_unit_multiplier": digit,
  "zkap_payment_url_root": "Payment webpage",
  "shares-needed": "1",
  "shares-happy": "2",
  "shares-total": "3",
  "storage": {
    "v0-...": {
      "nickname": "storage00#",
      "storage-options": [
        {
          "name": "privatestorageio-zkapauthz-v1",
          "storage-server-FURL": "pb://{Alphanumeric key}@tcp:{DNS}:8898/{Alphanumeric key}",
          "ristretto-issuer-root-url": "Payment Server DNS",
          "pass-value": 1000000,
          "default-token-count": 50000,
          "lease.crawl-interval.mean": 864000,
          "lease.crawl-interval.range": 86400,
          "lease.min-time-remaining": 0,
          "allowed-public-keys": "{Alphanumeric key}="
        }
      ],
      "anonymous-storage-FURL": "pb://@tcp:/"
    }
  }
}
```
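After filling in the placeholders, it's worth checking that the blob parses as JSON before embedding it in GridSync. `python3 -m json.tool` is one convenient validator:

```shell
# Self-contained demonstration of the idea; on a real blob you would run
# `python3 -m json.tool your-grid.json` ("your-grid.json" is a
# hypothetical filename).
printf '{"version": "2"}\n' | python3 -m json.tool
```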
# Testing with GridSync without Rebuilding
1. The grid configuration JSON blobs are embedded inside the GridSync AppImage. You can extract the contents of an AppImage file by running it with the `--appimage-extract` flag.
2. Then, modify the respective JSON blob. You'll have to delete persistent runtime configs from previous runs if they exist (preferably just delete whatever is in `~/.config/gridsync` to start with a fresh GridSync instance).
3. Run `./squashfs-root/AppRun --debug`.
4. If you run into an issue and suspect it's a problem in the backend, check the logs coming from the Eliot client in GridSync's UI -> `Help` -> `View Debug Information`.
The relevant logs start from `------ Beginning of Tahoe-LAFS log for <Filtered:GatewayName:1> ------`.
5. Use your preferred JSON viewer to format the JSON blob and look for exceptions.
## Upgrading NixOS (sometimes the deployment needs to start with a newer kernel version)
You may need to upgrade NixOS if you're using an older installation, since WireGuard is not included in older Linux kernels.
To upgrade NixOS:
1. As the root user, replace the NixOS channel so it points to the one you want to upgrade to, while ensuring it is named `nixos`:
```shell
nix-channel --add https://nixos.org/channels/nixos-21.05 nixos
```
and update the channel (`nix-channel --update`).
2. As the root user, build your system:
```shell
nixos-rebuild boot --upgrade
```
3. Reboot to enter your newly-built NixOS.
If things go wrong you can reboot, select the previous generation, use `nix-channel` to add the old channel, and then `nixos-rebuild boot` to make the working generation the default; I think it's more reliable to rebuild than to use `nixos-rebuild --rollback`.
## Adding SSH keys
```shell
eval `ssh-agent -s`
ssh-add privateKey
```
The setup of HRO needs `SSH_USER`:
```shell
export SSH_USER=root
```