# Esplora deployment

#### Lead: Ian

## Description:
Esplora is a blockchain explorer backed by an Electrum server (electrs) that also allows for tor-gap capabilities. This is useful for anti-correlation when broadcasting transactions. The API also makes it useful for separating wallet and server functionality in other apps, as well as during sweeping.

#### Problem:
The blockstream.info server is a single point of failure for those relying on its service, should it go offline. For the sake of redundancy, several instances of Esplora should be deployed across different regions for fault tolerance as well as load distribution, so everyone isn't relying on a single service. Deploying an instance on a cloud-based VPS can be expensive if you instantiate a bundled instance with a large enough hard drive. Thus, the processor needs to be separated from the storage. This can be done by instantiating a smaller Linode instance and combining it with a separate storage volume large enough to sync a full node and store the indexes.

#### Links:
Blockstream: https://blockstream.info/
Github: https://github.com/icculp/esplora
API Docs: https://github.com/icculp/esplora
https://blog.keys.casa/electrum-server-performance-report/

## TODO:
* Convert run script into a service
* Automate the service into the setup script
* Learn how to autoscale an instance on AWS
* Waiting for mainnet instance to sync; verify it's running
* Modification of UI display, header/footer, etc.
  * Try first in testnet VM
  * Integrate this into setup scripts?
  * How to make this easy for replication?
  * Append exports to flavors/bitcoin-mainnet/config.env
  * Prompt user for values to populate
* Figure out tor integration. Maybe seek Gorzad's assistance?
* Add a startup script to run the docker container in screen, to ensure the service restarts after a power cycle
* Write a readme/tutorial to deploy a Linode instance, create a volume, and run the StackScript, aimed at non-technical users. Discuss resource requirements
* PR for the Blockstream Esplora readme; issue for the Docker Hub version incompatibility. Possibly improve the readme to build from the Blockstream GitHub rather than Docker Hub?

## Notes & Progress

### Initial testing
Testnet deployment failed on the first Linode instance after running out of hard drive space. Tried in a Debian-based VM with a 512 GB HD. It takes forever for electrs to sync fully; kept getting the dreaded "Esplora is currently unavailable, please try again later." and it still was not syncing after days. Moved the VM to a 1 TB SSD over USB 3.0, then disabled pre_caching and address_search and enabled lightmode; it finally synced after over 24 hours, taking about 250 GB for testnet.

Currently trying a 4 GB Linode instance ($20) + 1.2 TB (1200 GiB) volume with mainnet, with the same features disabled as above.

### 6/24
http://192.53.166.171:8080/ started last night, 6/23; now, ~12 hours later, syncing is roughly 20% complete.

### 6/26 6:20pm
~43% sync complete. I think that progress bar is based on blocks, but blocks can have a variable number of transactions inside, so some blocks index faster than others; 0-20% was much quicker with fewer txs per block, but indexing is taking longer for subsequent blocks?

Created an install/setup script covering docker, screen, mounting of the volume, git clone, and the run script: https://github.com/icculp/esplora/blob/master/setup.sh
Script to run the mainnet docker container in screen is working: https://github.com/icculp/esplora/blob/master/run_mainnet.sh

### 7/11/2021 10pm
![](https://i.imgur.com/6yWRQVj.png)
This has been stuck at .9999xxx verificationprogress for at least 4 days.
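A minimal way to keep an eye on that figure without attaching to the docker output (assuming the data_bitcoin_mainnet/bitcoin/ layout used elsewhere in these notes):

```
# bitcoind's UpdateTip log lines include a progress= field (verificationprogress);
# grab the most recent value from the container's data directory on the host.
grep -o 'progress=[0-9.]*' data_bitcoin_mainnet/bitcoin/debug.log | tail -1
```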
At this point I don't believe that hard drive read/write speed is the bottleneck. I ran iostat, iotop, and top to view the read/writes for each drive, then for each process, and to view overall resource usage. It appears bitcoind and electrs are competing for CPU, which is always maxed out, with intermittent large reads and then small writes. I wonder if it would be worthwhile to upgrade to a larger Linode instance to increase compute resources during this phase, and then downgrade afterwards once whatever it's spending so much time computing is complete.
![](https://i.imgur.com/99pIewm.png)
![](https://i.imgur.com/gx6ddbQ.png)

### 7/12/2021 1:50pm
Trying to modify the header/footer/site description variables in the local testnet VM:

* Tried exporting variables via bash: no effect after exiting the docker instance and rerunning.
* Tried sending them via the -e flag in the docker run command: no effect.
* Tried modifying them in flavors/bitcoin-testnet/config.env: no effect.
* Tried modifying them in the .travis.yml file in the root folder: no effect.
* Tried rebuilding the docker image after setting various variables to different values, to see which one (if any) sticks. Curious whether the database will need to be repopulated while this rebuilds, or whether it'll resync with the current data. After rebuilding, it synced fairly quickly since the data was already there, but once again, no effect.
* Added vars to build.sh and rebuilt. Once again, no effect.

Okay, after running `npm run dist` and then rebuilding the docker image, the variables I put into build.sh were reflected in the site. I had API_URL exported in bash and it was picked up during the npm build. Trying to determine whether the testnet API was going to blockstream; not sure how to check this? Inspecting the page and viewing the console/sources, I couldn't really tell. Might need to ask someone.

To get tor working, I need to build the tor docker image and see if an onion address is provided. If so, update the mainnet docker run to include the -e ONION_ADDRESS flag and write it into the FOOT_HTML GUI flag with an onion logo (copy from blockstream?).

Building the tor image is failing, so instead I'm pulling blockstream/tor:latest. When trying to run it, it can't read the torrc. Attempted placing the tor dir and config file in ~/esplora/tor, but it still says unable to read, despite sudo. Rather than -d to detach (and then having to find the container id and rm it every time), remove -d and add the --rm flag. Also, the GitHub readme's docker run should be `sudo docker run --rm --name hidden_service blockstream/tor:latest tor -f ~/esplora/tor/torrc`.
![](https://i.imgur.com/HHKnVHK.png)
Still unable to read, so I went back to trying to build; it seems the build is using the wrong version.
![](https://i.imgur.com/GVYow6U.png)
The version should be 0.4.3.7 instead of 0.4.2.7.
![](https://i.imgur.com/KwkmYMp.png)
![](https://i.imgur.com/1aAOLLf.png)
This seems to be up to date, but not on Docker Hub: https://github.com/Blockstream/bitcoin-images/blob/master/tor/Dockerfile

### 7/17/2021
Esplora is still showing as unavailable; thinking that something is wrong with the default settings. Troubleshooting: it looks like the API is unavailable. Comparing to the local testnet VM, `curl localhost:8084/api` works, but `localhost:8084/testnet/api` is moved permanently.
![](https://i.imgur.com/8aRLeO1.png)
The API is showing bad gateway. Trying to curl locally, still bad.
![](https://i.imgur.com/GAEMOOq.png)
![](https://i.imgur.com/uimzowd.png)
It seems like the UI is looking for :8080/api, but that's not being served there; it's at :8080/mainnet/api.
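A quick way to check where the REST API is actually mounted; GET /blocks/tip/height is part of the Esplora HTTP API, and the ports/prefixes are the ones from these notes:

```
# Mainnet instance: which prefix actually answers?
curl -i http://localhost:8080/api/blocks/tip/height
curl -i http://localhost:8080/mainnet/api/blocks/tip/height

# Testnet VM for comparison
curl -i http://localhost:8084/api/blocks/tip/height
curl -i http://localhost:8084/testnet/api/blocks/tip/height
```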
This is confusing because on the testnet VM :8084/testnet/api is unavailable, but :8084/api is available. I'm confused.
![](https://i.imgur.com/uRfhMGZ.png)
:8084/api took me here
![](https://i.imgur.com/wMIfUhN.png)
![](https://i.imgur.com/UFanbxM.png)
So mainnet is looking for :8080/api, but it's not there; it's at :8080/mainnet/api. Looking at the config.env for mainnet vs. testnet, testnet has BASE_HREF set, but mainnet does not. Tried adding that to mainnet and rerunning `npm run dist && sudo docker build -t esplora .`, but this screwed it up even more: the site wasn't serving at all, none of the assets or icons were found, nothing served. Have to go; when I get back, try again without the BASE_HREF var, but setting API_URL in the mainnet config.env to point to '/mainnet/'??

### 7/23/2021
Previous notes were nonsense; it redirects to Page Not Found. Ignore the above.

Esplora with the block storage volume is still indexing, and it's taking FOREVER. I'm starting to think it's impossible to sync without an SSD. After the initial bitcoind sync, which took over a week, electrs will sometimes be a single thread reading around 100 MB/s, close to the read speed of the drive. Then it'll branch out into multiple threads with very slow read speeds. Not sure what's happening there:
![](https://i.imgur.com/Vn3vYYH.png)
------
![](https://i.imgur.com/MHCaWlO.png)
When single-threaded it's just reading and skipping blocks...
![](https://i.imgur.com/v7Jc36T.png)
but once it's branched into multiple threads it shows this:
![](https://i.imgur.com/ueequ9o.png)
Not sure what it's doing exactly... Probably the same as this issue: https://github.com/Blockstream/esplora/issues/322

I tried playing with the parameters for starting electrs. Modified the runit script found in contrib and rebuilt with tx-limit set to something higher. This had no effect; I believe that flag has more to do with the number of txs shown than with any limit during indexing, but I'm not sure. I'm a noob, but I finally discovered I could enter the docker instance and run commands locally: `docker ps` to get the id, then `docker exec -it <id> /bin/bash`. Rather than modifying the runit file and rebuilding the docker image, I modified the service file at /etc/service/electrs/run and killed the process to try various parameters: `ps auxf | grep electrs` to get the pid, then `kill -9 <pid>`. The service restarts with the parameters in the run file. Tried adding flags that exist in the romanz/electrs repo, such as index-batch-size, but the blockstream/esplora version of electrs doesn't accept them; it would error out and wouldn't run. I also tried removing --lightmode, but then electrs wouldn't recognize the index, and I didn't want to rebuild from scratch... I'm wondering if lightmode is slowing down the indexing process; I thought it would just make lookups slower once the sync was done. Reading through old issues led me to understand the problem a little better.
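Consolidating that procedure (the container id and pid are placeholders; as noted above, only flags the blockstream fork of electrs already accepts will work, otherwise the service exits):

```
# Find the esplora container and open a shell inside it
docker ps
docker exec -it <container_id> /bin/bash

# Inside the container: adjust the electrs flags in /etc/service/electrs/run
# (edit with whatever editor the image provides), then kill the running
# electrs so runit restarts it with the new flags.
ps auxf | grep electrs        # note the electrs pid
kill -9 <pid>
```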
Missed some links, but these are good to read:
https://github.com/Blockstream/esplora/issues/156
https://bitcoin.stackexchange.com/questions/91746/configure-esplora-to-point-to-the-bitcoind-servers-ip-address/91859#91859
https://github.com/Blockstream/electrs
https://gitter.im/romanz/electrs?at=5b4c8c34866e0c6b15b1760e
https://gitter.im/romanz/electrs
https://github.com/romanz/electrs/blob/master/doc/config_example.toml
https://github.com/romanz/electrs/blob/p2p/doc/usage.md
https://github.com/Blockstream/esplora/issues/312
compaction process: https://github.com/facebook/rocksdb/wiki/Manual-Compaction
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016190.html
https://electrumx.readthedocs.io/en/latest/
https://docs.rs/electrs/0.8.10/electrs/config/struct.Config.html

## Must read:
https://blog.keys.casa/electrum-server-performance-report/

Trying a new instance on SSD, this time without lightmode and without disabling precaching. Created a new Linode account with the $100 credit under an alternative email, user icculp-csu. Tried to create a 64 GB instance, which comes with a 1.2 TB drive (Linode instances come with SSD bundled, so I believe it'll sync much faster). Wasn't able to; had to get authorization via a support ticket. Was able to start with an 8 GB instance and upgrade it after support responded, which was fairly quick, within an hour I think. Setup script for the new instance: https://gist.github.com/icculp/7c6f16aa060ab188d20a758baccde82b

Once it's up and running and fully synced, I'll try moving it to block storage and then downgrading the instance. Hopefully this works. If so, I'll later try starting with a local docker instance on an SSD over USB 3.0 to save cost on the initial sync, then upload the data to block storage via scp. So far, after about 16 hours, bitcoind is almost done syncing (96%). This is light speed faster. We'll see how long electrs takes to index.

Would love to figure this out:
https://github.com/tikv/rust-prometheus
https://docs.rs/prometheus/0.12.0/prometheus/
The romanz version comes with Prometheus metrics according to the docs, but the esplora version isn't working, even after opening the port in docker run (`-p 4224:4224`). Wish I could get a status on electrs: how far along the indexing is, how much is left to do...

While doing the above I was also looking into this PR: https://github.com/BlockchainCommons/Bitcoin-Standup-Scripts/pull/15
It's missing a lot for esplora deployment. Esplora can be run without docker, but I'd have to figure out setup/install for nginx and ufw, which are missing from the script. If one wants to use docker, then a docker install needs to be added to the script. The esplora docker image puts everything, including the bitcoin block data and the electrs index, in esplora/data_bitcoin_mainnet. But frustratingly, this data would normally be found in .bitcoin, which is the path other parts of the script reference. Modifying the script to work with the docker data layout would be a pain. Should I try to deploy without docker and let bitcoind, electrs, and tor run on the native host? The tor part I'm unfamiliar with...

Since esplora is IO-bound on reads, I wonder if it's just reading from the bitcoin data, or if the electrs database needs to be on the SSD as well until indexing is complete??

Less than 24 hours later, the new instance has almost filled its 600 GB (at 93%) and needs an upgrade already. Meanwhile, the other esplora instance, running from block storage (HDD), has been going for nearly a month and is barely at ~600 GB.
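Since electrs doesn't report indexing progress directly, one rough proxy is to watch how much space the data directory has consumed against the ~600 GB+ figures in these notes (a sketch; `<mountpoint>` is a placeholder for wherever the volume is mounted, e.g. /mnt/esplora_test2):

```
# Re-check every 5 minutes how big the esplora data dir has grown
# and how full the volume is.
watch -n 300 'du -sh <mountpoint>/esplora/data_bitcoin_mainnet; df -h <mountpoint>'
```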
Something about the blockstream version of electrs is really limited by the read speed of the drive. I've tried diving into the docker container and modifying the electrs service with various flags I see in the docs for the romanz version and the original, but it won't run with anything other than the flags it already accepts.

Resizing the Linode to a 64 GB instance is taking much longer than before. I wonder if they are copying the entire disk.

### 7/24/2021
The 64 GB instance finished syncing and is fully functional. It took less than 36 hours. It seems like HDD read speeds really limit the blockstream version of electrs, and it'll just take too long to fully sync and index. Copied the data to a block storage volume and it still works. This is still attached to a large VPS though, so I need to retest with a smaller VPS.

Also tried remote mounting via sshfs (`sshfs root@45.56.69.50:/root/ /mnt/testing`) to mount the SSD dir of the large instance onto a smaller instance remotely. This isn't working; it's refusing to connect to bitcoind at localhost:8332, and RocksDB is also having IO errors. Tried saving the docker image to a .tar and importing it on the other instance, same issue: `docker images` to get the image name, then `docker save -o /dirwanted/imagename.tar imagename`, then `docker load -i imagename.tar` on the other host. I wonder if it's related to the data being on the primary disk. Curious whether this could work when it's a secondary drive, or if it's more complicated than that...

Cloned the block storage volume and am trying it with a 2 GB Linode instance. Rebuilding the docker image took much longer. Bitcoind was ~100 blocks behind at this point and it's taking forever to catch back up. Maybe 1 CPU is not enough? Going to let this run for a while and see in a few hours if it can catch up. If not, I'll try a 4 GB instance with 2 CPUs.

Uh oh, problem on the large instance that's running from block storage... It's not updating to the latest block. Unable to connect to bitcoind:
![](https://i.imgur.com/Gkk5jPn.png)
Looks like there are electrs issues related to this:
https://github.com/romanz/electrs/issues/9
https://github.com/romanz/electrs/issues/199
This is also relevant: https://gitter.im/romanz/electrs
Restarting the docker container updates it to the latest block. I wonder if electrs or bitcoind is deleting the cookie file upon blocknotify, or if the daemon is restarting with a different config? Hmm, I just watched it update to the latest block, which negates the above conjecture. Will let it run for a while.

On the 2 GB node with the cloned block storage, bitcoind finally finished syncing. Electrs, I guess, is now trying to catch up, but it seems to be barely treading water:
![](https://i.imgur.com/hgMsQK0.png)
An hour later, falling further behind:
![](https://i.imgur.com/e6LpVRR.png)
Another 45 minutes, farther behind:
![](https://i.imgur.com/7CCj4BL.png)
A couple of hours later, falling farther and farther behind:
![](https://i.imgur.com/1NQfKGd.png)
Going to upgrade to 4 GB with 2 CPUs and see if it can catch up.

Watching in iotop, I'm seeing reads up to 1,000 MB/s, so this can't be an HDD, even though it's block storage. Did they assign me an SSD as block storage? Does this mean I can't rely on this to work on other block storage volumes? This is frustrating... Maybe they're automatically assigning me faster hard drives because I've created large/expensive Linode instances... Sigh.
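For reference, a couple of standard ways to gauge a drive's raw sequential read speed (the device path is a placeholder for the block storage device; these are just typical options, not necessarily the exact command used below):

```
# Buffered sequential read timing straight from the block device
sudo hdparm -t /dev/sdb

# Or read 1 GiB with direct I/O, bypassing the page cache
sudo dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct status=progress
```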
![](https://i.imgur.com/YPQoftl.png)
![](https://i.imgur.com/TbDknWy.png)
Reads spike very high, then go back down. This is where the other instance, with the slower block storage, gets bogged down to KB/s reads across the 15 threads... Ran a command I found to gauge the read speed; it seems the block storage is much slower than the native drives. Not sure why these reads are spiking so high... I'm very confused.
![](https://i.imgur.com/mhuDYSM.png)
If this is using block storage, then what's wrong with the other instance where those 15 thread reads are in KB/s? Is this a bug with one of the flags? The node with the super slow reads is using lightmode, with precaching disabled and no address search. Is one of these causing the super slow reads? I can't stop using lightmode without reindexing. Maybe I'll try removing the other two flags.

Bad news. I wasn't actually on block storage on esplora-csu: I ran my script file from the mounted storage, but the filepath was pointing to the data on the main drive. Big sigh.

### 7/25/2021
Discovered that right after I had deleted the data from the primary SSD, as I was about to downsize the Linode instance. So I copied the data back from block storage. Woke up around 4am, noticed it was done copying, reran the docker container until it synced (very quickly), then copied the changed files back to block storage. Reran the script from the correct dir and can confirm it is now working on block storage.

The 2 GB instance that was falling behind was upgraded to 4 GB, but was still stuck on precaching. It looked like it was just rerunning the precache script over and over; the data on the drive was increasing slightly and then going back to where it was. I added the NO_PRECACHE=1 flag to the docker run and it started working fairly quickly. Downsizing back to 2 GB to test.

### 7/26/2021
I'm nearing the end of the $100 credit on this new account. As of this morning both instances (a large 96 GB instance, but with its data moved to block storage, and a smaller 2 GB instance with block storage) were fully synced and working. I started working on downgrading the 96 GB instance. At this point, the 2 GB instance started falling behind and is no longer working. Is this because congestion increased with the volatility over the weekend, meaning more new transactions to index, and it's falling behind? Or did downgrading the large expensive node somehow lower the priority of all my instances? Looking at YCharts, it actually looks like the number of transactions per block has decreased...

Upgraded the 2 GB instance back up to 4 GB and it synced quickly. I wonder if the docker container just needed to be restarted, or if it needs the extra CPU and RAM? Going to downgrade again and keep an eye on it... Took about 45 minutes to resync. 1.7/2 GB RAM and swap maxed out, 1:00 pm CST. 1:13 pm, 1.9/2 GB RAM. Web UI working but showing 2 blocks behind.

I'm seeing now how crucial understanding the requirements of the intended user can be, in what I thought was going to be a simple deployment of esplora. Now that I've played around with esplora for a while, have a couple of instances working, and have a better understanding of the resource requirements and the difference between the blockstream version of electrs and the romanz version, I'm wondering who the intended user of this esplora deployment for Blockchain Commons is supposed to be.
If it's for personal use and needs to be deployed on the lowest (cheapest) resources, I wonder why an individual would need the massive index (0.6 to 1.2 TB, depending on whether lightmode is enabled) that blockstream/electrs creates, if bitcoind can be queried on the fly (though slightly slower). In that case, maybe the romanz version of electrs (<100 GB index) would make more sense for a low-cost deployment of esplora with a less resource-intensive version of the electrs API. If the intended user is an enterprise requiring the high-volume indexing that blockstream/electrs provides, then would a 2 GB instance even keep up? Would it then need to be stress-tested with high-volume queries? If the web UI sometimes falls a block or two behind while catching up the index, would this fail the intended enterprise user? If instead the idea is just to have a remote backup of the entire index blockstream/electrs creates, then maybe a 2 GB instance is fine.

Several hours later, the 2 GB instance is so far behind that the electrs API has stopped. It looks like electrs is behind bitcoind by a few blocks (indexing ...807 while the latest block is ...809). Watching htop, memory increases until it maxes out the 2 GB, then txs are flushed to the db, and repeat. Probably 2 GB of memory isn't enough for blockstream/electrs.

### 7/27/2021
Working overnight on the 4 GB instance. Tried precaching again but it's still running out of memory. Going to upgrade to 8 GB and see if it can handle it. Precaching works on the 8 GB instance; memory usage is around 6.5 GB. Ran for a few hours before downgrading back to 4 GB and disabling precaching.

BC logo:
`curl -o bc_logo.png 'https://www.blockchaincommons.com/images/Borromean-rings_minimal-overlap(256x256).png'`
`cp bc_logo.png static/bitcoin-mainnet/img/icons/menu-logo.png`
This resets after restarting the docker container. The CSS class is navbar-brand and it looks for the logo at /img/icons/menu-logo.png. Need to find what creates app.js and modify that; add it to the setup script. Alternatively, copy the logo to esplora/www/img/icons/menu-logo.png, run `npm run dist`, and rebuild the docker image; then it persists. Like:
`curl -o bc_logo.png 'https://www.blockchaincommons.com/images/Borromean-rings_minimal-overlap(256x256).png'`
`cp bc_logo.png /mnt/disks/sdb/esplora/www/img/icons/menu-logo.png`

Could maybe modify bitcoin-mainnet-explorer.conf.in, which creates /data/.bitcoin.conf, to add a dbcache param. Maybe 6000 for an 8 GB instance? `echo dbcache=6000 | sudo tee -a /dir/filenameof.conf.in`

Working on evaluating Amazon S3 bucket storage, as it could be considerably cheaper than Linode block storage. Have to create an AWS account, create an S3 bucket, create an IAM user, add a permission group, and add a policy. I selected the default S3FullAccess policy.
`sudo apt-get install s3fs awscli -y`
`echo accesskey:secretaccesskey > ~/.passwd-s3fs`
`chmod 600 ~/.passwd-s3fs`
Have to attach the policy to users and groups. Played around with attaching and reattaching. Using a user API key rather than root; should probably disable root access. This, I think, did it:
`sudo s3fs -f -d esplora:/ /mnt/s3 -o passwd_file=/root/.passwd-s3fs -o allow_other -o url=https://s3-us-east-2.amazonaws.com`
Had to add the `-o url=...` option, modified to be https://s3-us-... from the bucket overview.

Also, since FUSE is being used for docker, it was necessary to modify /etc/fuse.conf and uncomment `user_allow_other` to enable it. Added these flags during troubleshooting; not sure if they are still necessary once fuse.conf is modified:
`sudo s3fs -f -d esplora /mnt/s3 -o passwd_file=~/.passwd-s3fs -o allow_other`

### 7/28/2021
The s3 bucket disconnected. Trying Google Cloud with Compute Engine and a standard disk. For 2 CPUs, 8 GB RAM, and a 1250 GB HD, it's around $100. Have one instance starting from scratch, and another that I'm going to copy from Linode. Mounting is a pain; follow https://cloud.google.com/compute/docs/disks/add-persistent-disk
`sudo lsblk`
`sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb`
`sudo mkdir -p /mnt/disks/sdb`
`sudo mount -o discard,defaults /dev/sdb /mnt/disks/sdb`
`sudo chmod a+w /mnt/disks/sdb`
`sudo blkid /dev/sdb`
(copy the UUID into the line below, replace MOUNTOPTION with nofail, and add it to /etc/fstab)
`echo "UUID=cc79a348-1e6e-4515-92a2-5bcdf0e3f03f /mnt/disks/sdb ext4 discard,defaults,nofail 0 2" | sudo tee -a /etc/fstab`
Now to copy:
`scp -r root@198.58.124.178:/mnt/esplora_test2/esplora /mnt/disks/sdb/`

On the instance running from scratch, the bitcoind sync is going very slowly. Going into the docker container and modifying /data/.bitcoin.conf to add `dbcache=1024` seems to help speed it up a bit.

For Google Cloud instances, port 8080 needs to be opened: Firewall settings under VPC network. Opened ingress and egress for 8080 for 0.0.0.0/0.

I think 2 CPUs during the bitcoind sync is slowing it down; one CPU is stuck at 70% while the other is at 4%. Maybe because it's a shared-core instance. Going to try upgrading the instance to 8 CPUs and see what it looks like. Definitely moving faster, but it has 8 CPUs and 8 GB RAM, so I'm not sure which was more important, as I also changed dbcache to 5000.

### 7/29/2021
Had copied the working Linode instance via scp to a Google Cloud VM, but it was failing on run. Tried rsync to make sure everything copied, but it failed on a symlink; apparently symlinks don't copy without the --force flag if the linked dir isn't empty.
`rsync -a --force root@198.58.124.178:/mnt/esplora_test2/esplora /mnt/disks/sdb`

The Google VPS starting from scratch is still only at 39% overnight (it was at 32% last night). Increasing dbcache didn't help much, apparently. I guess 2 CPUs and 8 GB RAM aren't enough, or maybe just the 2 CPUs. I know I mentioned that before, but that was 2 shared cores; I thought 2 standard cores would be fine. Moved to 16 CPUs/16 GB and it was moving fast, going up to 56% within a couple of hours. Going to try 4 CPUs/16 GB to see if it moves similarly fast (this is the lowest option with 4 CPUs, so it can determine whether 4 CPUs is enough to speed it up). Maybe dbcache at 6000 also needs more than 8 GB RAM? Hope it wasn't paging.

`tail data_bitcoin_mainnet/bitcoin/debug.log` to view bitcoind status without the docker run output.

### 8/3/2021
On my phone, but thinking about how to speed up deployment cheaply... The GC from-scratch instance is still building the index: constant reads from the HD across 15 threads, around 20 MB/s, using 13.5/16 GB RAM. The 4 CPUs don't look maxed out. I was thinking maybe I can compress the entire esplora dir once it's fully indexed and then pin the compressed archive to Sia, with a script to reload it periodically. Others could use it for a simple deployment without an SSD. The other option: a GC instance with a 500 GB SSD and a 1500 GB HDD, putting bitcoind on the SSD. The problem before was figuring out setup without docker, but maybe I could create a symbolic link to a dir on the SSD and use docker like usual??? Going to try that when I get home in a few hours.

The symlink didn't work; I kept getting an error about "mkdir failed, file already exists". Tried mount --bind and this seems to be working.
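A quick way to confirm the bind mount is in place, and that it's one filesystem visible at two paths rather than a copy (the target path matches the mount command in the next entry):

```
# Show what is mounted at the bitcoin data dir inside the esplora tree;
# a bind mount lists the source path/subtree it is bound from.
findmnt --target /mnt/disks/sdb/esplora/data_bitcoin_mainnet/bitcoin
```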
At first I thought data was being copied to both dirs, but usage on /root was going up more than the HDD on /mnt. Bitcoin is syncing from scratch, so later, when it's reading, I'll see if it's reading at SSD speed.
`sudo mount --bind --verbose /root/bitcoin /mnt/disks/sdb/esplora/data_bitcoin_mainnet/bitcoin`

### 8/7/2021
Added the mount to /etc/fstab:
`/root/bitcoin /mnt/disks/sdb/esplora/data_bitcoin_mainnet/bitcoin none bind 0 0`

I have had two instances in GC trying to index from scratch, and neither is done. Instance 6 has bitcoind on SSD (see the mount above), with /etc/fstab modified to mount on boot. Average reads are still not what I'd consider SSD speed, but it's way faster than instance 4, where everything is on HDD. I have tried playing around with RAM and CPU, but CPU usage over time seems to be low on both. Figured out I can install the monitoring agent through GC to easily see resource utilization over time. On both, memory is maxed out but CPU is low. Instance 6 has 8 CPUs/8 GB RAM and bitcoind on SSD; notice its IOPS are much higher than instance 4, but consistent read speed is still below 100.

Instance 6:
![](https://i.imgur.com/dDKljfB.png)
![](https://i.imgur.com/d63xV7U.png)
![](https://i.imgur.com/ux7EzwE.png)
Instance 4:
![](https://i.imgur.com/gkhnV8k.png)
![](https://i.imgur.com/q0tTcre.png)
![](https://i.imgur.com/cke5JVh.png)
Instance 4 has 4 CPUs/16 GB RAM; RAM is maxed out, IOPS and reads are very low.

It turns out raw read speed of the drive may not be the underlying limitation with cloud-based VPSs. It's IOPS on distributed cloud storage... Basically, the data is stored in a distributed manner, so IOPS are limited by network bandwidth, and cloud providers allocate more or less throughput based on the size of the instance you're running, the region, or other factors. More CPUs tends to mean higher bandwidth. I'm now wondering whether running this on direct hardware rather than in the cloud makes more sense, rather than needing large cloud instances just to get higher IOPS/bandwidth. Can GC allow me to adjust network IO upwards?
https://www.reddit.com/r/googlecloud/comments/kr7u08/are_gcp_persistent_disks_too_slow_for_doing_large/
GC does compare these storage options to a standard HDD at 7200 RPM: https://cloud.google.com/compute/docs/disks/performance

### 8/9/2021
https://github.com/BlockchainCommons/Learning-Bitcoin-from-the-Command-Line/blob/master/14_1_Verifying_Your_Tor_Setup.md
Regex the onion address out and script it into the setup for the UI, or paste it from `bitcoin-cli getnetworkinfo`. Add the cli call into the script? Simpler:
`sudo cat data_bitcoin_mainnet/bitcoin/debug.log | grep "tor: Got service ID " | cut -d ' ' -f 9 | tail -1`
`export ONION_V3=` to flavors/bitcoin-mainnet/config.env won't be available until after the screen session starts. Rebuild after screen? Is the onion address persistent through docker restarts?

### 8/12/2021
The onion address is persistent.

Started a new large Linode instance with Longview tracking enabled so I could see the peak IOPS, HD space, and memory. Created a script to track HD space:
```
#!/usr/bin/env bash
# Log a timestamped df reading for /dev/sda every 5 minutes
while true
do
    echo "$(date)" >> hd_size
    df --human-readable /dev/sda >> hd_size
    sleep 5m
done
```
Interestingly, HD space went up to 1.4 TB, then dropped down to 857 GB, and is now increasing again. Is this the compaction process? Why's it still increasing then? Does it build the index and compact as it goes along?
![](https://i.imgur.com/9szs7k7.png)
CPUs shot up to 800% (maxing 8 cores) but most of the time stay below 4.
I'm a bit confused about RAM, because top/htop shows little utilization, and here it shows low use but a maxed-out cache equaling the amount of RAM in the instance. I don't know what this means. In GC I don't see cache metrics like that... Is something about Linode's caching making it super duper fast? Is it something about the NVMe drives? I don't understand.
![](https://i.imgur.com/ECCjLr6.png)
Looking to convert the docker start script into a service: https://unix.stackexchange.com/questions/236084/how-do-i-create-a-service-for-a-shell-script-so-i-can-start-and-stop-it-like-a-d

Could only pull the onion v2 address from debug.log. Need to figure out how to get the v3 address.

### 8/13/2021
Large instance with SSD:
![](https://i.imgur.com/Nk0JVIX.png)
![](https://i.imgur.com/911iJq2.png)
hd_size log: https://hackmd.io/kgD79y6uS_67PqhoV-ianw

Service for systemctl:
```
[Unit]
Description=Starts mainnet esplora in screen under root
After=network.target

[Service]
Type=forking
TimeoutStartSec=1
Restart=on-failure
RestartSec=5s
ExecStart=/mnt/disks/sdb/esplora/run_mainnet.sh

[Install]
WantedBy=multi-user.target
```
in `/etc/systemd/system/mainnet.service`
```
sudo systemctl daemon-reload
sudo systemctl start mainnet
```
Works on restart and on killing of the process. Added to the setup script.

### 8/16/2021
I haven't been posting notes lately. Trying to figure out a Kubernetes deployment on GKE or EKS. Got GitLab Runner installed with autoscaling on Amazon last night; now I need to figure out the CI/CD pipeline.

Also, the HDD deploy on Linode finally finished.
![](https://i.imgur.com/stZmaAY.png)
![](https://i.imgur.com/Kt84FU2.png)
![](https://i.imgur.com/J2EhKLn.png)
Looks like max space used was 1551 GB, and there were two periods of filling/compaction. Even with large RAM (96 GB) it still took 3-4 days with IOPS limited to 500/s, but it's maybe promising for a Kubernetes autoscaling deploy without SSD. Killing the instance.

This GC instance seems to have been stuck for days, maybe because it didn't have enough memory (16 GB). It has a separate SSD for bitcoind, and the remaining drive has 800 GB used, which seems like what the other instances have used for electrs. Restarting docker to see if it finishes quickly.
![](https://i.imgur.com/p2QPe2X.png)
Going to kill this instance. Not sure why it's stuck; HD space has increased and compacted twice, similar to the other instances, but it never finishes whatever it's doing, and it's expensive to keep running.
![](https://i.imgur.com/QuFMTbU.png)
![](https://i.imgur.com/RNvRsGd.png)
![](https://i.imgur.com/uV3auVy.png)
![](https://i.imgur.com/xQaJICw.png)
Disk throughput is very low, so I don't know what it's doing... Is the bind mount for bitcoind on a separate drive screwing it up?

### 8/20/21
In archiving the block data and electrs index, I was initially thinking of archiving the entire esplora directory, but that would include the onion address and other things specific to a single setup, such as the UI and the public tor keys, so I'm only going to archive the data_bitcoin_mainnet dir and include it as an option in the setup script.
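One way to keep the archive path-independent, using tar's -C so the stored paths are relative to the esplora directory (a sketch assuming GNU tar and the paths used here; the extraction target is a placeholder):

```
# Store members as data_bitcoin_mainnet/... instead of mnt/disks/sdb/esplora/...
sudo tar czvf /mnt/disks/sdc/data_bitcoin_mainnet-08-20-21.tar.gz \
    -C /mnt/disks/sdb/esplora data_bitcoin_mainnet

# When extracting an archive that was made with the absolute path, drop the
# four leading components (mnt/disks/sdb/esplora) instead:
sudo tar xzvf data_bitcoin_mainnet-08-20-21.tar.gz \
    -C /path/to/esplora --strip-components=4
```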
The command used:
`sudo tar czvf /mnt/disks/sdc/data_bitcoin_mainnet-08-20-21.tar.gz /mnt/disks/sdb/esplora/data_bitcoin_mainnet`

### 8/22/2021
lightmode and precaching disabled:
![](https://i.imgur.com/QgyJu3Q.png)
![](https://i.imgur.com/PGq6kqU.png)
![](https://i.imgur.com/IDJT3FT.png)
![](https://i.imgur.com/pElRXLT.png)
full size
![](https://i.imgur.com/xZM6GZg.png)
full tar
![](https://i.imgur.com/30yG38O.png)
lightmode size
![](https://i.imgur.com/yAbKdTD.png)
lightmode tar
![](https://i.imgur.com/LsmwmUs.png)
For the time it took to tar, which was like half a day, it might make sense to just host the uncompressed data, since compression only saves about 100 GB for each, or 5-10%.
https://olivermarshall.net/how-to-upload-a-file-to-google-drive-from-the-command-line/

### 8/29/2021
Untarring is trickier... it untars with the previous filepath prepended...
![](https://i.imgur.com/G8VPMI1.png)
Moved it and it's working.
![](https://i.imgur.com/HylQH1J.png)
Need to figure out how to get tar to ignore the filepath, either during compression or during extraction.

Rebuilding the index is running out of space after the untar, even when it's only a few days behind. A volume synced from scratch is fine, but the untarred version expanded to fill the empty space rapidly. Resizing the volume.
![](https://i.imgur.com/sJyOmpN.png)
Noticed the log mentions running out of RAM; with only 4 GB it wasn't going to sync once it got too far behind, and the HD space ran out because it was paging to disk. Maybe if it's fully synced it can immediately be downgraded to 4 GB and it'll hopefully catch back up. But it's probably better to run on 8 GB anyway, which will also allow the precaching of popular addresses feature to work.

gcloud setup: create the instance, enable http/https, add a standard persistent disk (cheapest).
![](https://i.imgur.com/yIAoSIx.png)
![](https://i.imgur.com/iOzUgt5.png)
![](https://i.imgur.com/pwN5XD6.png)
![](https://i.imgur.com/RqTzbHT.png)
Allow Cloud Shell to run: hit enter to execute the prefilled command, then authorize it to run the API call, and y to install SSH keys.
Mount the disk: https://cloud.google.com/compute/docs/disks/add-persistent-disk
`vmstat` for cache
![](https://i.imgur.com/Bs3Pj1K.png)
![](https://i.imgur.com/BTsAuRK.png)
Allow for ingress and egress, as two separate firewall rules:
![](https://i.imgur.com/AbfOAFY.png)
Can either add a tag to the instance, or apply the rule to all instances in the network if this is the only thing running on it.
![](https://i.imgur.com/ppcEmXO.png)
![](https://i.imgur.com/EH9UzEy.png)
Name it whatever you want; I used in8080. Didn't need http or https... Might need to open 8333 for the onion service?

When electrs is indexing, you can gauge the progress by seeing which blk***.dat file it's at:
![](https://i.imgur.com/w2dW4PP.png)
In the bitcoin directory containing the block data, you can see how far up the blk*.dat files go, up to 2709 in my dir.
![](https://i.imgur.com/MYSZj4P.png)

### 9/4/2021
Installing tor on the main host and adding onion_v3 as an Onion-Location header, injecting the HTML header via flavors as well as an onion_v3 var for the UI. To get Onion-Location working, HTTPS is needed; otherwise Tor Browser won't recognize the header and automatically offer the ".onion available" popup.
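A minimal sketch of that header, assuming an nginx reverse proxy in front of the explorer (the snippet path, domain, and onion address are placeholders, and the snippet still has to be included inside the HTTPS server block):

```
# Drop an Onion-Location header snippet for nginx (placeholder onion address).
sudo tee /etc/nginx/snippets/onion-location.conf > /dev/null <<'EOF'
add_header Onion-Location http://youronionaddressgoeshere.onion$request_uri;
EOF

# After adding "include snippets/onion-location.conf;" to the HTTPS server
# block, test the config and reload nginx.
sudo nginx -t && sudo systemctl reload nginx

# Verify the header is actually being sent over HTTPS.
curl -sI https://explorer.example.com/ | grep -i onion-location
```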