# resonance infra interactions

## [res infra is managed by rubberneck](https://github.com/grenade/rubberneck/tree/main/manifest/resonance)

if you're familiar with [cloud-init](https://cloud-init.io/), then you already know most of how rubberneck works. the difference is that cloud-init is designed to run once, on first boot, while rubberneck runs constantly, forever: managed instances are polled frequently (about every 5 minutes) to validate that actual configuration matches the manifest definitions, and any drift is rectified.

one of the benefits of this system is that you can log in to a system via ssh, accidentally break things with a hammer, and rubberneck will come along a few minutes later and tidy up after you. you can also use some other orchestration tool like ansible or puppet on instances that are managed by rubberneck and rubberneck won't get its panties twisted: it's designed to be more flexible than those alternatives and to coexist with or tolerate them. your friendly sysadmin knows that using rubberneck means he can let engineers go sudo crazy on production systems without worrying about what they might change or break. that's not to say you can't wreck a rubberneck managed system. you can. it's just ridiculously easy for rubberneck to fix it and lock you out afterwards.

## the res server room

![dimitar talev racks](https://lh3.googleusercontent.com/pw/AP1GczMXbidF9c5gPe301PIKYAGnQgmV_R3fUX8udhJ3-603Ns6EAcH7vKoBPJsX3HlvvZlroEHd8iTf-sbCm07qyIN5RNwRDrKnYBiPRhNkD6jWMGecZe4lSrmM0rv3MljwHSakmnDtkSL3DkC5XDY_2RsU3Q=w1500-h1992-s-no)

our servers sit in a small server room in the bulgarian mountains. they have ups power banks that let them survive short power outages, and they have multiple redundant network connections over both 1gbps fibre and 5g. they are monitored by [uptime-kuma](https://status.res.fm), [prometheus](https://prometheus.res.fm/targets) and [grafana](https://grafana.res.fm),
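the poll-and-rectify cycle described above can be sketched in shell. this is an illustrative sketch of the pattern only, not rubberneck's actual implementation; the `desired_state` and `actual_state` variables are hypothetical stand-ins for manifest definitions and observed instance state:

```shell
#!/usr/bin/env bash
# hypothetical sketch of a poll-and-rectify cycle (not rubberneck's
# actual code): desired state comes from a manifest, actual state is
# observed on the instance, and any drift is corrected in place.
desired_state="managed-by-rubberneck"
actual_state="hammered-by-engineer"

reconcile() {
  if [ "${actual_state}" != "${desired_state}" ]; then
    # rectify: overwrite the observed state with the manifest definition
    actual_state="${desired_state}"
    echo "drift rectified"
  else
    echo "matches manifest"
  fi
}

reconcile  # first pass detects and fixes the drift
reconcile  # second pass finds nothing to do
# in production this would run from a timer (every ~5 minutes),
# rather than once on first boot like cloud-init
```

the point of the sketch is the contrast with cloud-init: the check is cheap and idempotent, so it can run forever without doing harm when nothing has drifted.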
these monitoring services are deployed to [hetzner metal](https://robot.hetzner.com) instances in finland and germany. hostnames, with several exceptions, are taken from characters in *the hitchhiker's guide to the galaxy* by douglas adams.

## how to access and manipulate a res instance

- add your ed25519 ssh public key to the resonance authorised keys config for each node you want to access. e.g.: for allitnils, enter your key below `#engineering` here: https://github.com/grenade/rubberneck/blob/main/manifest/resonance/allitnils/manifest.yml#L48
- connect via ssh from the command line:
  ```bash=
  ssh -p $ssh_port resonance@$fqdn
  ```
  or add the hosts to your ssh config (`~/.ssh/config`):
  ```=
  Host allitnils
    Hostname a1.i.res.fm
    IdentityFile ~/.ssh/id_ed25519
    Port 52104
    User resonance
  Host colin
    Hostname a2.i.res.fm
    IdentityFile ~/.ssh/id_ed25519
    Port 52105
    User resonance
  # ... more host configs
  ```
  then just (for `a2.i.res.fm`):
  ```bash=
  ssh colin
  ```
- tail the resonance node logs from the command line, when ssh'd to the node:
  ```bash=
  journalctl -f -u resonance-node.service
  ```
- get the resonance node logs for a given utc time period:
  ```bash=
  journalctl \
    --unit resonance-node.service \
    --since "2025-04-30 05:30:00" \
    --until "2025-04-30 06:00:00"
  ```
- search the resonance node logs from the command line:
  ```bash=
  journalctl --unit resonance-node.service --grep "my search pattern"
  ```
- stop, start or restart the node:
  ```bash=
  sudo systemctl stop resonance-node.service
  sudo systemctl start resonance-node.service
  sudo systemctl restart resonance-node.service
  ```
- view the node, miner and telemetry start args:
  ```bash=
  cat /etc/systemd/system/resonance-node.service
  cat /etc/systemd/system/resonance-miner.service
  cat /etc/systemd/system/substrate-telemetry-shard.service
  ```
- check the node version:
  ```bash=
  /usr/local/bin/resonance-node --version
  ```
- deploy a new node binary to all integration testnet nodes, resetting the chain db on each and waiting until all nodes have been cleared of data before
restarting all nodes (assumes you have node aliases configured for ssh):
  ```bash=
  # local path to the binary being deployed
  local_bin_path=~/git/resonance-network/backbone/target/release/resonance-node

  # specify target node list
  declare -a nodes=()
  nodes+=( allitnils )
  nodes+=( colin )
  nodes+=( rai )

  # stop and reset all nodes, deploying the new binary to each
  for node in "${nodes[@]}"; do
    ssh "${node}" '
      # stop node service
      sudo systemctl stop resonance-node.service;
      # delete chain db
      sudo rm -rf /var/lib/resonance/chains;
    '
    # deploy binary to node
    rsync \
      --archive \
      --compress \
      --rsync-path='sudo rsync' \
      "${local_bin_path}" \
      "${node}":/usr/local/bin/resonance-node
  done

  # start all nodes
  for node in "${nodes[@]}"; do
    ssh "${node}" sudo systemctl start resonance-node.service
  done
  ```
- maintain the testnet faucet bot
  - ssh connect to host trillian:
    ```bash=
    ssh -p 52200 resonance@trillian.thgttg.com
    ```
  - stop, start or restart the faucet bot:
    ```bash=
    sudo systemctl stop resonance-telegram-faucet.service
    sudo systemctl start resonance-telegram-faucet.service
    sudo systemctl restart resonance-telegram-faucet.service
    ```
  - tail the faucet bot logs:
    ```bash=
    journalctl -f -u resonance-telegram-faucet.service
    ```
  - view faucet bot start args:
    ```bash=
    cat /etc/systemd/system/resonance-telegram-faucet.service
    ```
  - view faucet bot config:
    ```bash=
    cat /var/lib/resonance/faucet-telegram-client/.env
    ```

## res nodes

### integration testnet

- a1.i.res.fm (aka: allitnils)
  - ip: 10.9.1.104
  - ssh port: 52104
- a2.i.res.fm (aka: colin)
  - ip: 10.9.1.105
  - ssh port: 52105
- a3.i.res.fm (aka: rai)
  - ip: 10.9.1.215
  - ssh port: 52215

### live testnet

- a1.t.res.fm (aka: bob)
  - ip: 10.9.1.201
  - ssh port: 52201
- a2.t.res.fm (aka: effrafax)
  - ip: 10.9.1.203
  - ssh port: 52203
- a3.t.res.fm (aka: frootmig)
  - ip: 10.9.1.202
  - ssh port: 52202

### infra

- trillian.thgttg.com (gitlab runners, testnet faucet bot)
  - ip: 10.9.1.200
  - ssh port: 52200
- blart.thgttg.com (resonance subsquid)
  - ip: 10.9.1.107
  - ssh port: 52107
- krikkit.thgttg.com (schrodinger subsquid)
  - ip: 10.9.1.106
  - ssh port: 52106
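ssh config stanzas for the nodes listed above can be generated from the alias/fqdn/port pairs rather than typed by hand. this is a small illustrative helper, not part of the managed tooling; the fqdns and ports come from the tables above, while the identity file path is an assumption:

```shell
#!/usr/bin/env bash
# print an ~/.ssh/config stanza for a res node.
# fqdns and ports come from the node tables above; the identity
# file path (~/.ssh/id_ed25519) is an assumption.
ssh_config_entry() {
  local host_alias="$1" fqdn="$2" port="$3"
  printf 'Host %s\n  Hostname %s\n  Port %s\n  User resonance\n  IdentityFile ~/.ssh/id_ed25519\n' \
    "${host_alias}" "${fqdn}" "${port}"
}

# integration testnet
ssh_config_entry allitnils a1.i.res.fm 52104
ssh_config_entry colin     a2.i.res.fm 52105
ssh_config_entry rai       a3.i.res.fm 52215
# live testnet
ssh_config_entry bob       a1.t.res.fm 52201
ssh_config_entry effrafax  a2.t.res.fm 52203
ssh_config_entry frootmig  a3.t.res.fm 52202
```

appending the output to `~/.ssh/config` enables the alias-style connections (`ssh colin`) shown earlier.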