---
[TOC]
---
GUI used was Jetstream2 exosphere: https://jetstream2.exosphere.app/
## Summary info
The base image created below is publicly available as "STAMPS-2023" and includes:
- conda v23.5.2 / mamba v1.4.9
- jupyterlab v3.6.3 in base conda env
- an [anvio-dev](https://anvio.org/install/#5-follow-the-active-development-youre-a-wizard-arry) conda environment
- R v4.3.1 / RStudio Server (2023.06.1-524) with:
    - BiocManager 1.30.21
    - remotes 2.4.2
    - tidyverse 2.0.0
    - phyloseq 1.44.0
    - dada2 1.28.0
    - decontam 1.20.0
    - DESeq2 1.40.2
    - tximport 1.28.0
    - devtools 2.4.5
    - breakaway 4.8.4
    - DivNet 0.4.0
    - corncob 0.3.1
    - speedyseq 0.5.3.9018
    - rigr 1.0.5
    - tinyvamp 0.0.5.0
- 1,000 GB shared storage mounted at /opt/shared for reference dbs or whatever else is worth sharing
After creating an instance from the image (detailed at the bottom of this page), these links will get us to the common services for a given instance IP:
- jupyter lab: \<IP\>:8000
- rstudio: \<IP\>:8787
- anvio (when interactive interface is running): \<IP\>:8080
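As a convenience, those links for a given IP can be printed with a tiny loop (the `IP` value here is a made-up example):

```shell
# print the common service links for a given instance IP
IP="149.165.1.2"   # hypothetical example IP -- substitute a real one
for svc in "jupyter lab:8000" "rstudio:8787" "anvio:8080"; do
    printf '%s -> http://%s:%s\n' "${svc%%:*}" "$IP" "${svc##*:}"
done
```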
---
## Launching initial image we are building ours on
Ubuntu 22.04 (latest)

Instance size: m3.large (16 CPU, 60 GB RAM, 60 GB disk) is plenty for what we're doing here.

That's all for this, can click Create.
Once launched, we'll want to ssh into it. Navigate to that instance's page, and use the `ssh` info at the bottom to log in with the passphrase provided there:

---
## Setting up a shared file-system for reference dbs or other wanted stuff
Doing with [manila](https://docs.jetstream-cloud.org/ui/horizon/manila/).
Logged into Horizon as detailed [here](https://docs.jetstream-cloud.org/ui/horizon/login/).
Then followed the steps [here](https://docs.jetstream-cloud.org/ui/horizon/manila/) (make sure to have the correct project/allocation selected):

Click create.
Manage Rules -> Add Rule:


Click Add.
Back on the Project / Share / Shares page, we can select the share and see its metadata; the key things are "Path" and "Access Key".
Then onto [here](https://docs.jetstream-cloud.org/general/manilaVM/) after logging in below.
---
## After logging in with ssh, first switching to sudo
Being sudo just makes some things easier to do, but I also think things created as a regular user (in an area like /opt that should carry over to future instances made from this image) don't get maintained. So I've ended up doing everything as sudo and setting permissions at the end so they aren't a problem for users.
```bash
sudo bash
```
### Connecting to fileshare started above
Following [here](https://docs.jetstream-cloud.org/general/manilaVM/):
```bash
mkdir /opt/shared
```
Created this file (its name is based on what I named the "Access Rule" above, which is listed under "Access To" on the "shares" page, https://js2.jetstream-cloud.org/project/shares/):
```bash
nano /etc/ceph/ceph.client.BIO230091-STAMPS-2023-share.keyring
```
And put in the following, adding the actual "Access Key" after the equals sign (it's listed as "Access Key" on the "shares" page; the client name here should also match the "Access Rule"):
```
[client.BIO230091-STAMPS-2023-share]
key =
```
<!--
Changing permissions:
```bash
chmod 600 /etc/ceph/ceph.client.BIO230091-STAMPS-2023-share.keyring
```
-->
Editing `/etc/fstab` file:
```bash
nano /etc/fstab
```
To hold the following line, where `$path` is replaced with the "Path" information from the share creation above, and `name=` holds the client name used in the keyring file above (here "BIO230091-STAMPS-2023-share"):
```
$path /opt/shared ceph name=BIO230091-STAMPS-2023-share,x-systemd.device-timeout=30,x-systemd.mount-timeout=30,noatime,_netdev,rw 0 2
```
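A valid fstab entry has exactly 6 whitespace-separated fields, so a quick field count can catch a malformed line before it causes trouble at boot (the path below is a made-up example; yours comes from the share's "Path" value):

```shell
# count the fields of an fstab entry (should be 6: device, mountpoint,
# type, options, dump, pass)
line='10.0.0.1:6789:/volumes/example /opt/shared ceph name=BIO230091-STAMPS-2023-share,x-systemd.device-timeout=30,x-systemd.mount-timeout=30,noatime,_netdev,rw 0 2'
nfields=$(printf '%s\n' "$line" | awk '{ print NF }')
if [ "$nfields" -eq 6 ]; then echo "fstab line looks well-formed"; else echo "unexpected field count: $nfields"; fi
```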
Mounting:
```bash
mount -a
```
We may get an error about some other mount not succeeding, but we can check that the one we wanted did mount:
```bash
df -h | grep /opt/shared
# $path 1000G 0 1000G 0% /opt/shared
```
### Setting timezone to EDT where workshop will be
```bash
timedatectl set-timezone America/New_York
```
### Modifying system-wide bashrc and skel bashrc files
#### Modifying /etc/bash.bashrc
This is the system-wide bashrc file. We're only adding the conda info to the bottom, so just appending here. (R is going to stay separate, not used in jupyter lab for now; see another page, like [this one](https://hackmd.io/@AstrobioMike/making-GL4U-2023-CSULA-instances), if wanting R available in jupyter notebooks via a kernel.)
```bash
cat >> /etc/bash.bashrc << 'EOF'
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/opt/miniconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/opt/miniconda3/etc/profile.d/conda.sh" ]; then
. "/opt/miniconda3/etc/profile.d/conda.sh"
else
export PATH="/opt/miniconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda initialize <<<
EOF
```
#### Modifying /etc/skel/.bashrc
This is the bashrc profile that is copied over to new users (which are created when an instance is launched). Here we are changing the prompt (to display exactly what's needed for scp, e.g., `user@IP:/abs_path$`) and adding the conda stuff to the end. Just overwriting the whole file here because it's easier to copy and paste this entire codeblock:
```bash
cat > /etc/skel/.bashrc << 'EOF'
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
# don't put duplicate lines or lines starting with space in the history.
# See bash(1) for more options
HISTCONTROL=ignoreboth
# append to the history file, don't overwrite it
shopt -s histappend
# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=1000
HISTFILESIZE=2000
# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize
# If set, the pattern "**" used in a pathname expansion context will
# match all files and zero or more directories and subdirectories.
#shopt -s globstar
# make less more friendly for non-text input files, see lesspipe(1)
[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"
# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
fi
# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color|*-256color) color_prompt=yes;;
esac
# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
#force_color_prompt=yes
if [ -n "$force_color_prompt" ]; then
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
# We have color support; assume it's compliant with Ecma-48
# (ISO/IEC-6429). (Lack of such support is extremely rare, and such
# a case would tend to support setf rather than setaf.)
color_prompt=yes
else
color_prompt=
fi
fi
# getting externally accessible IP address
accessible_IP=$(dig +short myip.opendns.com @resolver1.opendns.com)
if [ "$color_prompt" = yes ]; then
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;34m\]\u@${accessible_IP}\[\033[00m\]:\[\033[01;35m\]\w\[\033[00m\]\$ '
else
PS1='${debian_chroot:+($debian_chroot)}\u@${accessible_IP}:\w\$ '
fi
unset color_prompt force_color_prompt
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@${accessible_IP}: \w\a\]$PS1"
;;
*)
;;
esac
# enable color support of ls and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
alias ls='ls --color=auto'
#alias dir='dir --color=auto'
#alias vdir='vdir --color=auto'
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
fi
# colored GCC warnings and errors
#export GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'
# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'
# Add an "alert" alias for long running commands. Use like so:
# sleep 10; alert
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if ! shopt -oq posix; then
if [ -f /usr/share/bash-completion/bash_completion ]; then
. /usr/share/bash-completion/bash_completion
elif [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi
fi
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/opt/miniconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/opt/miniconda3/etc/profile.d/conda.sh" ]; then
. "/opt/miniconda3/etc/profile.d/conda.sh"
else
export PATH="/opt/miniconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda initialize <<<
EOF
```
### Installing R
> If I end up needing different R conda environments at some point, maybe try this: https://github.com/grst/rstudio-server-conda#running-locally
Following here: https://linuxize.com/post/how-to-install-r-on-ubuntu-20-04/
```bash
apt install dirmngr gnupg apt-transport-https ca-certificates software-properties-common
```
Adding CRAN repository:
```bash
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9
add-apt-repository 'deb https://cloud.r-project.org/bin/linux/ubuntu focal-cran40/'
```
Installing R:
```bash
apt install r-base
```
That gave me a problem with libicu66. Found help here (https://linuxhint.com/install-r-and-rstudio-ubuntu/), from which I did the following:
```bash
wget http://security.ubuntu.com/ubuntu/pool/main/i/icu/libicu66_66.1-2ubuntu2_amd64.deb
dpkg -i libicu66_66.1-2ubuntu2_amd64.deb
```
Then the r-base install worked:
```bash
apt install r-base
```
### Installing RStudio Server
Following here, with some modifications, from the Install for Debian 10 / Ubuntu 18 / Ubuntu 20 section: https://www.rstudio.com/products/rstudio/download-server/debian-ubuntu/
```bash
apt-get install gdebi-core
wget https://download2.rstudio.org/server/focal/amd64/rstudio-server-2023.06.1-524-amd64.deb
gdebi rstudio-server-2023.06.1-524-amd64.deb
```
This gave me a libssl dependency error:
```
Dependency is not satisfiable: libssl1.0.0|libssl1.0.2|libssl1.1
```
Followed workaround here, https://askubuntu.com/a/1403683, doing the following:
```bash
echo "deb http://security.ubuntu.com/ubuntu focal-security main" | tee /etc/apt/sources.list.d/focal-security.list
apt-get update
apt-get install libssl1.1
gdebi rstudio-server-2023.06.1-524-amd64.deb
rm /etc/apt/sources.list.d/focal-security.list
```
### Installing miniconda3
```bash
curl -O -L https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
```
**During above interactive install steps**
- set install location to `/opt/miniconda3` (user directories not maintained with image creation)
- say "yes" to initialize at end of install
#### Sourcing to activate new environment
```bash
source ~/.bashrc
```
### Installing mamba and wanted programs/envs
```bash
conda install -y -c conda-forge 'mamba>=0.24.0'
mamba install -y jupyterlab==3.6.3
```
#### Anvio-dev
Following the anvio docs (https://anvio.org/install/#5-follow-the-active-development-youre-a-wizard-arry) with some modifications/additions. They had the bioconda channel before conda-forge for some reason, so I went the typical bioconda-instructions way (conda-forge -> bioconda -> defaults). And I'm installing into /opt so it's retained on new instances (the startup commands added at the end were also changed to point to /opt).
```bash
mamba create -y -n anvio-dev python=3.7
conda activate anvio-dev
mamba install -y -c conda-forge -c bioconda -c defaults python=3.7 \
sqlite prodigal idba mcl muscle=3.8.1551 hmmer diamond \
blast megahit spades bowtie2 tbb=2020.3 bwa graphviz \
"samtools >=1.9" trimal iqtree trnascan-se fasttree vmatch \
r-base r-tidyverse r-optparse r-stringi r-magrittr bioconductor-qvalue fastani
mkdir -p /opt/github && cd /opt/github/
git clone --recursive https://github.com/merenlab/anvio.git
cd /opt/github/anvio/
pip install -r requirements.txt
mkdir -p ${CONDA_PREFIX}/etc/conda/activate.d/
cat << EOF > ${CONDA_PREFIX}/etc/conda/activate.d/anvio.sh
# creating an activation script for the conda environment for anvi'o
# development branch so (1) Python knows where to find anvi'o libraries,
# (2) the shell knows where to find anvi'o programs, and (3) every time
# the environment is activated it synchronizes with the latest code from
# active GitHub repository:
export PYTHONPATH=\$PYTHONPATH:/opt/github/anvio/
export PATH=\$PATH:/opt/github/anvio/bin:/opt/github/anvio/sandbox
echo -e "\033[1;34mUpdating from anvi'o GitHub \033[0;31m(press CTRL+C to cancel)\033[0m ..."
cd /opt/github/anvio/ && git pull && cd -
EOF
```
> **NOTE**
> There also needs to be a line added to the bootscript. It's down below in the instance-creation section of this page, but I'm noting it here too in case you're only looking at this part in the future! It needs this in the `runcmd` section:
> ```
> # launching anvio-dev git pulls, this deals with a "Dubious" ownership message
> - sudo -u stamps -H sh -c "git config --global --add safe.directory /opt/github/anvio"
>```
### After all conda installs, adjusting permissions in those areas
Modifying permissions and removing json files in the conda area so users will be able to install things (also changing things in the anvio-dev area):
```bash
find /opt/miniconda3 -exec chmod a+rw {} \;
find /opt/miniconda3 -type d -exec chmod a+rwx {} \;
# then also removing cache jsons as this seems to cause problems with mamba installs later
# e.g.: https://github.com/mamba-org/mamba/issues/488#issuecomment-986828363
rm /opt/miniconda3/pkgs/cache/*.json
find /opt/github -exec chmod a+rw {} \;
find /opt/github -type d -exec chmod a+rwx {} \;
```
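If you want to confirm what that find/chmod pattern does before pointing it at the real directories, it can be rehearsed on a scratch directory first (files end up world read/write, directories additionally world-executable):

```shell
# rehearse the find/chmod pattern on a throwaway directory tree
tmp=$(mktemp -d)
mkdir -p "$tmp/envs" && touch "$tmp/envs/pkg.txt"
find "$tmp" -exec chmod a+rw {} \;
find "$tmp" -type d -exec chmod a+rwx {} \;
stat -c '%a %n' "$tmp/envs" "$tmp/envs/pkg.txt"
# expect: 777 for the dir, 666 for the file
rm -rf "$tmp"
```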
### Installing R packages
To install some of the required R packages, needed these as well:
```bash
apt-get -y install libcurl4-openssl-dev \
libssl-dev libxml2-dev \
libudunits2-dev libcairo2-dev \
libgdal-dev libharfbuzz-dev \
libfribidi-dev
```
Then installing the R packages into `/usr/local/lib/R/site-library/`; you can copy/paste this to make the script:
```bash
cat > r-installs.R << 'EOF'
install.packages("BiocManager", lib="/usr/local/lib/R/site-library/", repos='http://cran.us.r-project.org')
install.packages("remotes", lib="/usr/local/lib/R/site-library/", repos='http://cran.us.r-project.org')
BiocManager::install("tidyverse", lib="/usr/local/lib/R/site-library/")
BiocManager::install("phyloseq", lib="/usr/local/lib/R/site-library/")
BiocManager::install("dada2", lib="/usr/local/lib/R/site-library/")
BiocManager::install("decontam", lib="/usr/local/lib/R/site-library/")
BiocManager::install("DESeq2", lib="/usr/local/lib/R/site-library/")
BiocManager::install("tximport", lib="/usr/local/lib/R/site-library/")
BiocManager::install("devtools", lib="/usr/local/lib/R/site-library/")
devtools::install_github("adw96/breakaway")
devtools::install_github("adw96/DivNet")
devtools::install_github("bryandmartin/corncob")
devtools::install_github("mikemc/speedyseq")
devtools::install_github("statdivlab/rigr")
remotes::install_github("ailurophilia/fastnnls")
remotes::install_github("ailurophilia/logsum")
devtools::install_github("statdivlab/tinyvamp")
EOF
```
And running it (takes a while, maybe 40 minutes):
```bash
Rscript --vanilla r-installs.R
```
### Creating Jupyter boot script in /opt/
This sets the jupyter notebook password and is used to launch jupyter lab when a new instance is created from this image (the bootscript detailed below in the instance-creation section runs it); that way instances launch ready to be accessed at their jupyter lab links.
We have to put in a hashed password; here is an example of how we can create one, using an example password:
```python
# in a python session (launched by running `python` in a terminal):
from notebook.auth import passwd
passwd("pw123", algorithm = "sha1")
# 'sha1:e985a3b764c2:ad258b3ca7c3d7fe86283d87731eaa92e87f5206'
```
The output from whatever real password you put in there replaces what's in the below codeblock at the "sha1" spot (the string following the `u`).
Got some of that info from [here](https://jupyter-notebook.readthedocs.io/en/stable/public_server.html#preparing-a-hashed-password).
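For reference, the string `passwd()` produces is just `algorithm:salt:hexdigest`, where the digest is (if I recall the notebook implementation correctly) the hash of the password concatenated with the salt. A stdlib sketch of that layout, with a fixed salt purely for illustration (notebook generates a random one):

```shell
python3 - << 'PYEOF'
# rebuild the 'sha1:<salt>:<digest>' layout with hashlib
# (fixed salt here only for illustration)
import hashlib
password, salt = "pw123", "e985a3b764c2"
digest = hashlib.sha1((password + salt).encode("utf-8")).hexdigest()
print(f"sha1:{salt}:{digest}")
PYEOF
```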
```bash
cat > /opt/jupyter-boot.sh << 'EOF'
#!/bin/bash
rm -rf ~/.jupyter
# making default config files
/opt/miniconda3/bin/jupyter server --generate-config
# setting some things
printf "
c = get_config()
c.NotebookApp.ip = '*'
c.NotebookApp.open_browser = False
c.NotebookApp.password = u'sha1:a8ba30b3c9b5:9dd60607c8eaad898874c9c7d2b1cf5494153e0c'
c.NotebookApp.port = 8000
" >> ~/.jupyter/jupyter_server_config.py
# launching jupyterlab
cd ~/
nohup /opt/miniconda3/bin/jupyter lab > ~/.jupyter/log 2>&1 &
EOF
```
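Before burning the image, a cheap sanity check on the boot script is `bash -n`, which parses it without executing anything; demonstrated here on a throwaway script:

```shell
# parse-check a script without running it
tmp=$(mktemp)
printf '%s\n' '#!/bin/bash' 'echo hello' > "$tmp"
bash -n "$tmp" && echo "syntax OK"
rm -f "$tmp"
```

On the instance it would just be `bash -n /opt/jupyter-boot.sh`.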
---
## Now burning this image
Don't do this too quickly after making the above changes; I think it actually needs a few minutes or the last file changes don't propagate to the burned image (sounds weird and unlikely, I know, but it felt like it happened a few times). I've taken to waiting 2 or 3 minutes after my last change, even though I'm sure I'm crazy.
Can exit the `ssh` connection, and on the instance page on exosphere on the right side, select "Actions" then "Image":

Named it "STAMPS-2023":

---
## Creating instance from this image
### Create Instance
Through the exosphere interface, begin the process of creating an instance, select to do it by image, and search for STAMPS-2023:

**Add name, select m3.large (for this one), click to show Advanced Options**
- set to "Assign a public IP address to this instance"
- replace what's in the "Boot Script" block with the below Boot Script text
- if using this for reference, and not specifically for the STAMPS 2023 course, you will likely want to alter some things in there, like the user names, passwords, and ssh keys
- but if this is for the STAMPS 2023 course, you can just jump to that block to copy/paste it into the instance creation Boot Script window
#### Modifying cloud-init config
These changed with JetStream2; start-up stuff that used to be deploy scripts is handled by a cloud-init config now (https://docs.jetstream-cloud.org/ui/exo/create_instance/#advanced-options).
Info on boot commands here: https://cloudinit.readthedocs.io/en/latest/topics/examples.html#run-commands-on-first-boot
**New JS2 way**
The BootScript there should be erased, and the following copied and pasted in (modified where appropriate as detailed below if needed).
Things set below:
- adding in a 'titus' user
- adding a 'stamps' user
- both have sshkeys for my computers
- jupyter lab launch script executed
The messy `#cloud-config` codeblock below includes the relevant passwords for this event. If you need to make new ones, here's an example of making a 'salted' password, which is how they should go into the config, as done below:
```bash
openssl passwd -1 'pwd123'
# $1$n9TqfAQL$CqERWJdggOBq5cMAGBNCR.
```
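Note that `openssl passwd -1` generates a random salt each time, so the output differs run to run; passing an explicit salt makes it reproducible, which is handy if you ever want to re-derive and check a hash (`stampsxx` below is just an example salt):

```shell
# with a fixed salt, the same password always yields the same MD5-crypt hash
openssl passwd -1 -salt stampsxx 'pwd123'
# output has the form $1$stampsxx$<22-character digest>
```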
The output from a real password would be placed in the `passwd:` fields below where appropriate; adjust the usernames if wanted, then copy and paste into the Boot Script window (note that this includes ssh keys for mike and titus).
```
#cloud-config
users:
- default
- name: titus
shell: /bin/bash
groups: sudo, admin, users
sudo: ['ALL=(ALL) NOPASSWD:ALL']{ssh-authorized-keys}
lock_passwd: false
passwd: $1$lS58A.Ee$WIVvQjNvHYojn/iPpuVYj0
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6P9HXOJooxMDwGdGsrTAfMsf8+m28EoSpUJoecfcb8x2D19IG2DBdnfM0Tbg7ofwWDW77bd+XZbIMBbDo/+4HwdNTSDjURsL/rtjZxQrmz+ecoQtnl+J8gdWUy/EHV/kwg5D25kzKCJQpV4ktrA7I3sI2AClAafF5y6glJTV3e6vhgyqGNVH3olo3nYi8pZHJiKt2hmgihq8NxEnsVLqnCS+I6SDR+icPinttqp0nOZgzVIWza82az8LuQRHfQVWJhYp/rVrvgC+9v06/xrNGqvU8WeTKuvq0hKzEAuRpVNu5FPkAfV8aBGntyc3D8uXvMishG/0Bbh9JsU5BuVstKpXa4RgBwRY6QM7tVT0dKFKrPAtD7XM+LAQ+BTK1MXjqnZgTUTLTb91i8isf+fu+vhwYcoFll3ExUq6VcpXmEtUeST79oc2n1Ko16YbUf+dtYu8WocTCt8B9AvTaHax+QvCTeuw8sJzkcyJZZ5tglurg0y+dymyutgXbt1F4zQk= mdlee4@ARLAL0122021055
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcEe5VndHDJdc1Ez5/gPQ7xhqhs/ya2BewHe8bqomNQkAxSRrlBhkKII5wFEAqavngT6zQON1MlJpKXiU89cnfkLHRwWVmCGMWFtM+V3F1m4+bKx5kMVCcPM6l5olrEDkUc/7EohzD1hyKUuQF60IuZlhm1zx81vIj2ZzeGxgtBn+dbiOeGpt2ZcL23/OCYlFLfNafiVUCTVfCjpn26qzzpQN0bW+u90ff4o4pvUjunNuCrs32fLUSVJy9UlUcqLuj9WusX8d7TYug2tGuhyn07G7W2oGW0TDsphoKlBkhFjxxYK/h8kR5G6Nr9JGAo8IVzuqS/EJprwOmgSOd9Y/ODFBU0IrRhKJjc77LHjaPiG4UaMwOSYrzlxtUejSJUvwGZkv2HlDfDbvS2X7MBU8ky/sRfrEEKpXzxaPdKD77bqNg0Y1M20iZ0F9flIjOHfMciVkmB2tW9Dinmo0N2u7F1Zpo/+vDHf+tzZV0j6+PfjJcNdQJih+qp5do6Nv/L08= mike@2Europa.local
- ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHFz3WLVqV+0md4OkZi/S0a79cOO7Ax8S4Dledp832JhMQ0GJ0ZlmEnWZrIv83KnRexpAEi5w6H1aSackGjucgQ= t@TitusMatsalmoth
- name: stamps
shell: /bin/bash
groups: users
lock_passwd: false
passwd: $1$XZOTnJiJ$8xCmLvJncgkuCMpWQVZRB1
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6P9HXOJooxMDwGdGsrTAfMsf8+m28EoSpUJoecfcb8x2D19IG2DBdnfM0Tbg7ofwWDW77bd+XZbIMBbDo/+4HwdNTSDjURsL/rtjZxQrmz+ecoQtnl+J8gdWUy/EHV/kwg5D25kzKCJQpV4ktrA7I3sI2AClAafF5y6glJTV3e6vhgyqGNVH3olo3nYi8pZHJiKt2hmgihq8NxEnsVLqnCS+I6SDR+icPinttqp0nOZgzVIWza82az8LuQRHfQVWJhYp/rVrvgC+9v06/xrNGqvU8WeTKuvq0hKzEAuRpVNu5FPkAfV8aBGntyc3D8uXvMishG/0Bbh9JsU5BuVstKpXa4RgBwRY6QM7tVT0dKFKrPAtD7XM+LAQ+BTK1MXjqnZgTUTLTb91i8isf+fu+vhwYcoFll3ExUq6VcpXmEtUeST79oc2n1Ko16YbUf+dtYu8WocTCt8B9AvTaHax+QvCTeuw8sJzkcyJZZ5tglurg0y+dymyutgXbt1F4zQk= mdlee4@ARLAL0122021055
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcEe5VndHDJdc1Ez5/gPQ7xhqhs/ya2BewHe8bqomNQkAxSRrlBhkKII5wFEAqavngT6zQON1MlJpKXiU89cnfkLHRwWVmCGMWFtM+V3F1m4+bKx5kMVCcPM6l5olrEDkUc/7EohzD1hyKUuQF60IuZlhm1zx81vIj2ZzeGxgtBn+dbiOeGpt2ZcL23/OCYlFLfNafiVUCTVfCjpn26qzzpQN0bW+u90ff4o4pvUjunNuCrs32fLUSVJy9UlUcqLuj9WusX8d7TYug2tGuhyn07G7W2oGW0TDsphoKlBkhFjxxYK/h8kR5G6Nr9JGAo8IVzuqS/EJprwOmgSOd9Y/ODFBU0IrRhKJjc77LHjaPiG4UaMwOSYrzlxtUejSJUvwGZkv2HlDfDbvS2X7MBU8ky/sRfrEEKpXzxaPdKD77bqNg0Y1M20iZ0F9flIjOHfMciVkmB2tW9Dinmo0N2u7F1Zpo/+vDHf+tzZV0j6+PfjJcNdQJih+qp5do6Nv/L08= mike@2Europa.local
- ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHFz3WLVqV+0md4OkZi/S0a79cOO7Ax8S4Dledp832JhMQ0GJ0ZlmEnWZrIv83KnRexpAEi5w6H1aSackGjucgQ= t@TitusMatsalmoth
ssh_pwauth: true
package_update: true
package_upgrade: {install-os-updates}
packages:
- python3-virtualenv
- git{write-files}
bootcmd:
# have this here and in runcmd so if shelved and restarted it should launch jupyter again (here alone didn't work for me)
- sudo -u stamps -H sh -c "bash /opt/jupyter-boot.sh"
runcmd:
# next is launching jupyter lab
- sudo -u stamps -H sh -c "bash /opt/jupyter-boot.sh"
# launching anvio-dev git pulls, this deals with a "Dubious" ownership message
- sudo -u stamps -H sh -c "git config --global --add safe.directory /opt/github/anvio"
- echo on > /proc/sys/kernel/printk_devkmsg || true # Disable console rate limiting for distros that use kmsg
- sleep 1 # Ensures that console log output from any previous command completes before the following command begins
- >-
echo '{"status":"running", "epoch": '$(date '+%s')'000}' | tee --append /dev/console > /dev/kmsg || true
- chmod 640 /var/log/cloud-init-output.log
- {create-cluster-command}
- |-
(which virtualenv && virtualenv /opt/ansible-venv) || (which virtualenv-3 && virtualenv-3 /opt/ansible-venv) || python3 -m virtualenv /opt/ansible-venv
. /opt/ansible-venv/bin/activate
pip install ansible-core
ansible-pull --url "{instance-config-mgt-repo-url}" --checkout "{instance-config-mgt-repo-checkout}" --directory /opt/instance-config-mgt -i /opt/instance-config-mgt/ansible/hosts -e "{ansible-extra-vars}" /opt/instance-config-mgt/ansible/playbook.yml
- ANSIBLE_RETURN_CODE=$?
- if [ $ANSIBLE_RETURN_CODE -eq 0 ]; then STATUS="complete"; else STATUS="error"; fi
- sleep 1 # Ensures that console log output from any previous commands complete before the following command begins
- >-
echo '{"status":"'$STATUS'", "epoch": '$(date '+%s')'000}' | tee --append /dev/console > /dev/kmsg || true
mount_default_fields: [None, None, "ext4", "user,exec,rw,auto,nofail,x-systemd.makefs,x-systemd.automount", "0", "2"]
mounts:
- [ /dev/sdb, /media/volume/sdb ]
- [ /dev/sdc, /media/volume/sdc ]
- [ /dev/sdd, /media/volume/sdd ]
- [ /dev/sde, /media/volume/sde ]
- [ /dev/sdf, /media/volume/sdf ]
- [ /dev/vdb, /media/volume/vdb ]
- [ /dev/vdc, /media/volume/vdc ]
- [ /dev/vdd, /media/volume/vdd ]
- [ /dev/vde, /media/volume/vde ]
- [ /dev/vdf, /media/volume/vdf ]
```
Then click "Create".
Specific things checked:
- [x] jupyter (\<IP\>:8000)
- [x] rstudio (\<IP\>:8787)
- [x] r packages
- [x] anvio link when anvi-interactive is running (\<IP\>:8080)
- to test, from a terminal:
- `conda activate anvio-dev`
- `anvi-self-test --suite mini`
- eventually it will get to where it says the interactive interface is active at 0.0.0.0:8080, then go to the \<IP\>:8080 to access
- [x] conda/mamba install, and use as 'stamps' user
- [x] shared space `/opt/shared`
- [x] regular 'ol ssh from a terminal
- `ssh stamps@<IP>`
---
## CLI interface
After setting up the CLI stuff (e.g., see my notes and links to their docs [here](https://hackmd.io/Cagi8l71TPqRTv-knlKIBg)), we can grab all the wanted IPs and script anything needed if/when additions come up. Or do things like grab all IPs to automate making a table with participant names and links to their specific jupyter lab/RStudio/anvio pages (rather than needing to copy/paste individual IPs from the exosphere/jetstream2 interface).
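As a sketch of that last idea: given a file of name/IP pairs pulled with the CLI (the file and its format here are hypothetical), a few lines can emit a markdown table of per-person links:

```shell
# build a markdown table of per-person service links from "name IP" pairs
cat > instances.txt << 'LISTEOF'
alice 149.165.1.10
bob 149.165.1.11
LISTEOF
echo '| name | jupyter lab | rstudio |'
echo '|------|-------------|---------|'
awk '{ printf "| %s | http://%s:8000 | http://%s:8787 |\n", $1, $2, $2 }' instances.txt
rm instances.txt
```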