---
tags: stamps2025
---
[TOC]
# STAMPS - intro to our computing infrastructure! (STAMPS 2025)
Titus Brown, July 15, 2025
<!--
::::danger
Reminder to Titus: reset cloud computer :)
- conda env
- file system
::::
--->
## Survey!
Please! Take this 30-second survey!
[link](https://docs.google.com/forms/d/1tJYnba1LyR4N7oXn2j4_7iTMNIdLR2HKTIMejeajZr4/preview)
## Computing: some background info
We are using Jetstream, part of NSF ACCESS, set up by the inimitable Mike Lee (AstroBioMike). These are remote computers with reasonably substantial memory and compute and disk space. Talk to Titus if you're interested in using this after the class!
Lots of things have been pre-installed on these computers, almost all via conda (more about that later). This is mostly to make things go faster. All software we are showing you is free, and in most cases is straightforward to install.
(If you want to install a specific piece of software on your laptop or institutional compute cluster, we might be able to help if you run into problems!)
## But first... computing philosophy!
We try to teach at multiple overlapping levels: conceptual, scientific, practical, and technical.
All the programs change all the time, so we won't overemphasize specific programs.
But we are teaching the practice of microbiome data analysis, so we will teach you _something_.
And, more generally, there are a few techniques that are foundational. Different software packages implement them in various ways, but they usually share common opportunities and challenges. So when we teach specific techniques, we will emphasize how and why they are being used here, and what alternatives there are.
(Please ask questions about alternatives to any particular approach, as well as opportunities and challenges of each approach!)
Bioinformatics generally and microbiome analysis specifically is a surprisingly friendly and helpful space - or at least we think it should be! So we will spend more time explaining our opinions than simply delivering them, and will also try to highlight alternatives.
No one here - no student, TA, or faculty - should (will!) make you feel poorly about your knowledge, your skillset, or your science. Everyone here has things to learn, and we are all here to help each other learn!
One perspective that Amy and I bring to bioinformatics (and STAMPS) is that "black box" pipelines come with drawbacks - they are often tuned for different uses, can be a bit fragile and heavyweight, and (worst of all) make decisions for you. So we are all about "opening" the black box here. But! We will not neg pipelines and tools - our goal is to let you all figure out which knobs you can and maybe should turn.
So, please ask questions as you have them!
(I, Titus, really do like automated workflows, as implemented in snakemake and nextflow and WDL and CWL. More later.)
### A perhaps strange digression
Here are a few tensions that often come up:
* Data science and scripting vs programming and software development.
* Interactive, exploratory data analysis vs large-scale computing.
* Automation is good not just because it enhances reproducibility, but also because it improves efficiency.
We will be exploring these tensions throughout the course. I think we are "weakest" on programming and automation and that is by design. I am happy to discuss over lunch, in front of a whiteboard, etc.
### What is a bioinformatician??
I have strong opinions about common divisions in bioinformatics:
1. biomedical data scientists who use computers to do their biology;
2. workflow-enabled biologists who develop large, complex, multimodal data analysis workflows;
3. bioinformaticians who develop new algorithms and methods.
Almost everyone here will probably identify more with (1) and (2). Methods developers, (3), often have a different background and may know LESS biology and MORE computing.
I think (2) is a huge teaching/training need and opportunity.
## Teaching philosophy!
I follow a modified "Carpentries" teaching style, as do many of the instructors here at STAMPS! Features:
* mix of lecture, theory, and hands-on;
* start at the beginning;
* move slowly and encourage questions;
* go "off script" as needed, but not _too_ much;
The main modification I use for bioinformatics teaching is that I encourage copy-pasting and use of remote computing.
* Copy-pasting is better when you have long, complex commands;
* Remote computers are better when you have multiple complex packages to install, and "big" compute stuff to run;
Other teachers will use different techniques, of course, but we are all united in encouraging questions and discussion! Right?? *looks around meaningfully*
## Introducing stickies
Green sticky up - good to go
Sticky not up - still working
Arm up - please send help kthxbye
## Using hackmd as a lab notebook
Free! Collaborative! Persistent! Shareable! Updateable! Friendly to compute commands!
(And good for teaching, too.)
Can connect to GitHub ($$).
Also see:
* Obsidian (another Markdown based approach)
* Notion (a very popular notebook approach)
### Basics of Markdown
You can indicate headers with hashes:
```
# first level header
### third level header
#### fourth level header
```
Lists:
```
* a point
* another point
```
becomes
* a point
* another point
---
Verbatim code chunks!
````
```python
print('hello, world')
```
````
will produce this:
```python
print('hello, world')
```
Links are easy to add:
```
[link text](https://google.com/)
```
becomes:
[link text](https://google.com/)
And you can drag and drop/insert images, too!
### Some hackmd specific stuff
:::success
colored blocks.
:::
click to reveal
:::spoiler Spoilers
sekret code!
:::
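If you want to use these in your own notes, the raw syntax is exactly what you see above; here it is verbatim inside a code chunk so it doesn't render:
```
:::success
colored blocks.
:::

:::spoiler Spoilers
sekret code!
:::
```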
## Getting set up on our computers
List of cloud computers:
https://github.com/mblstamps/stamps2025/wiki/Accessing-our-cloud-computers
Find the computer next to your name. There are three links - two of which you will be using today!
The username and password for logging into these computers is on the board :).
A few things to mention:
* You cannot do any serious harm to these cloud computers. At the worst you will need to switch to a backup cloud computer and rerun a few things!
* We can absolutely start up new computers as needed!
So don't worry about the computing: you can compute freely - full speed ahead, and darn the consequences!!
## A tour of our personal STAMPS 2025 computer!
### RStudio


### JupyterHub

### ssh
An alternative way of connecting into the shell; Titus to demo :).
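If you'd like to try it yourself from your laptop's terminal, the command looks roughly like this (a sketch; the IP address below is just the example used in the prompts later on, so substitute the address listed next to your name on the wiki page):
```shell
# connect to the shell on your cloud computer over ssh
# (use your own computer's IP address, not this example one)
ssh stamps@149.165.151.217
```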
### ...it's all the same underneath
These are all different interfaces to the same file system on the same computer!
If you (e.g.) create (and save) a file in one interface, it is available in the other.
However, there is no coordination between the different interfaces. So I suggest sticking with one interface for *writing/editing/modifying* files.
### And, confusingly, they do more than their names
RStudio has R in its name, and was developed for R, but we will also be using it to run shell commands and edit files.
JuPyteR is named for Julia, Python, and R; it is mostly used for Python. But we will use it to run shell commands and edit files, too.
You can also run R and Python in the shell. :confused:
(Insert "Inception" reference here.)
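For example, from a shell prompt you can start an interactive R or Python session, or run one-liners; a minimal sketch (assuming R and Python are on your PATH, which they are on these machines):
```shell
# start an interactive session (quit R with q(), Python with exit() or Ctrl-D)
R
python
# ...or run a single expression without opening an interactive session:
Rscript -e 'summary(c(1, 2, 3))'
python -c 'print("hello from the shell")'
```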
## A trial run
The only way out is through!
### Step 0: Create a new folder, "day1"
Connect to the RStudio interface for your computer (feel free to use another interface if you like - Jupyter or ssh!).
Create a folder 'day1'.
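(If you'd rather use a Terminal for this, the equivalent shell command is below; it does the same thing as clicking "New Folder" in RStudio.)
```shell
# create a day1 folder in your home directory (-p: no error if it already exists)
mkdir -p ~/day1
```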
### Step 1: download and then upload a file into "day1"
Download [SRR11125891.sig](https://github.com/mblstamps/stamps2025/raw/refs/heads/main/sourmash-data/SRR11125891.sig) to your laptop, and then upload that into your "day1" folder.
You should now have a file `SRR11125891.sig` in your day1 directory.
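(As an alternative to the download-then-upload route, you can fetch the file directly onto the cloud computer from a Terminal; a sketch, assuming `wget` is available on the machine:)
```shell
# download the signature file straight into ~/day1, skipping the laptop round-trip
cd ~/day1
wget https://github.com/mblstamps/stamps2025/raw/refs/heads/main/sourmash-data/SRR11125891.sig
```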
::::info
Put up your green stickies when you get here :).
::::
### Step 2: open a Terminal or shell
Take down your stickies.
Get to a prompt that looks like this:
```
(base) stamps@149.165.151.217:~$
```
(You can do this in RStudio by clicking on Terminal.)
This prompt can be decoded as follows:
```
(base) stamps@149.165.151.217:~$
^^^^^^ ------------------------- conda environment
       ^^^^^^ ------------------ username
              ^^^^^^^^^^^^^^^ -- computer
                              ^ directory location
                                ^^ add commands here
```
Now change into the directory `day1`:
```
cd ~/day1
```
and activate the installed set of software in the `sourmash` conda environment:
```
conda activate sourmash
```
and your prompt should now look like this:
```
(sourmash) stamps@149.165.151.217:~/day1$
```
If you type `ls` you should see:
```
SRR11125891.sig
```
which is the set of files in this directory.
Put up your yellow stickies when here.
### Step 3: Search this metagenome against all known reference genomes
Take down your yellow stickies. Run:
```
sourmash gather -k 51 --scaled 10_000 \
    SRR11125891.sig \
    /opt/shared-2/sourmash-db/entire-2025-07-11.k51.rocksdb \
    -o SRR11125891.gather.csv
```
Note: you can copy/paste multiple lines all at the same time.
In the ideal world this will take about 10 seconds to run.
Put up your yellow stickies when done. And let me know if it seems stuck :sob:.
::::spoiler A brief explanation; more on Thursday
Here we are asking to search the metagenome [SRR11125891](https://www.ebi.ac.uk/ena/browser/view/SRR11125891) against a combined eukaryotic+GTDB rs226 database, using the [sourmash](https://sourmash.readthedocs.io/) software. The file being searched is a "sourmash signature" file that contains a subset of the k-mers from that metagenome, and we are searching it against a pre-built database.
The k-mer size being used is 51, and the compression ratio (`--scaled`) being used is 10,000.
We'll talk more about all of this on Thursday! Promise!
::::
You can look at the output file if you like - you can open it directly in RStudio, or download it and open it in Excel, or ... Here's a guide to the columns [link](https://sourmash.readthedocs.io/en/latest/classifying-signatures.html#appendix-d-gather-csv-output-columns).
The single most important column here is `f_unique_weighted`, which is a lower bound estimate of the fraction of metagenome reads that will map to each matching genome sequence. Each read is assigned to only one genome. More on Thursday!
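A quick way to peek at it from the shell (a minimal sketch; opening it in RStudio or Excel works just as well):
```shell
# list the column names of the gather output, one per line
head -n 1 SRR11125891.gather.csv | tr ',' '\n'
# show the first few result rows
head -n 5 SRR11125891.gather.csv
```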
### Step 4: Assign taxonomy
Stickies down!
The file contains genome names - and lots of them. Let's connect them to taxonomic information:
```
sourmash tax metagenome -g SRR11125891.gather.csv \
    -t /opt/shared-2/sourmash-db/entire-2025-07-11.lineages.sqldb \
    -o SRR11125891
```
Stickies up when this finishes running. It should take 10 seconds.
This output file contains a rollup summary of how many reads from the metagenome will map to genomes in each taxonomic unit. Each read is assigned to only one unit at each level.
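Note that `-o SRR11125891` is used as an output prefix; the summary lands in `SRR11125891.summarized.csv`, which we use in the next step. You can check what appeared with:
```shell
# see which output files the taxonomy step produced
ls -l SRR11125891.*
```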
### Step 5: Generate a sankey plot
Stickies down.
Let's viz!
```
sourmash scripts sankey --summary-csv SRR11125891.summarized.csv \
    -o SRR11125891.sankey.html
```
Now open this file in your Web browser.
Stickies up!
You should see:

This is a Sankey "flow" diagram showing how each taxonomic unit breaks down hierarchically.
(This is generated by the `sankey` command from the [sourmash 'betterplot' plugin](https://github.com/sourmash-bio/sourmash_plugin_betterplot).)
Does anything stand out? :)
### Step 6: Install and run taxburst
Stickies down!
Let's viz a second way.
Let's build a sunburst-style "Krona" diagram that shows the same information, using the 'taxburst' software. (Note: [taxburst](https://github.com/taxburst/taxburst) is a four day old "fork" of Krona :laughing: ).
Run this once:
```
conda tos accept --override-channels --channel defaults
```
First create a new conda environment named 'taxburst' that contains only Python:
```
conda create -n taxburst -y python=3.12
```
then activate that environment with 'conda activate', and install the taxburst software with pip:
```shell
conda activate taxburst
pip install -U taxburst
```
Finally! Run taxburst:
```shell
taxburst SRR11125891.summarized.csv \
    -o SRR11125891.taxburst.html
```
and now open that file in your browser.
Stickies up!
You should see something like the below, but interactive. Explore the interface!

:::info
You can no longer run sourmash: try typing `sourmash`. Why not??
:::
## Phew. That's a lot.
We did a thing! A few things, actually!
Pat yourself on the back. Take a moment. Relax.
Now, think of questions. Maybe we'll grab some coffee here.
## Revisiting the trial run: what did we do and how did we do it?
We did the following:
* created a project (day)-specific working directory to store all our files in. Why?
* changed to that working directory;
* activated some pre-installed software;
* put a file in place;
* ran a few programs (we'll talk about them on Thursday!)
* looked at an output file;
* created a new conda environment and installed a new piece of software!!!!
* ran that new piece of software and looked at a _new_ output file.
### Filenames
Filenames: why am I naming files and conda environments like this?
**Provenance**. It's how I kinda track what's going on.
UNIX doesn't (generally) care about filenames. RStudio and JupyterLab _sometimes_ do.
(I'm so sorry.)
### Installing software
Lots of software is installable via conda, including R and Python themselves. Above, we used it to install Python 3.12.
`pip` is the Python-specific package installation command; we used it to install the latest version of taxburst.
...In theory I could make taxburst installable via conda, but I have not done so yet...
How do you know how to install stuff?
* Look at the installation instructions for your package of interest, e.g. [taxburst install instructions](https://github.com/taxburst/taxburst?tab=readme-ov-file#install).
* Google the question :laughing: "can I conda install `<program>`". AI may lie to you but hopefully won't.
* If it's on pypi.org, e.g. https://pypi.org/project/taxburst/, you can usually pip install it.
Basically it involves some guessing, but conda is a good start.
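As a rough sketch of what that guessing looks like in practice, here are two things I often try from the command line (package names here are just examples; the `--dry-run` flag needs a reasonably recent pip):
```shell
# ask conda whether a package exists in the usual bioinformatics channels
conda search -c conda-forge -c bioconda sourmash
# ask pip to resolve a package from PyPI without actually installing it
pip install --dry-run taxburst
```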
### ...it's all the same underneath, part 2
Conda environments (and software installs) are persistent!
Input and output files are persistent across connections and reboots!
Your running shell environment (_which_ conda environment is active; what directory you're in) is _not_ persistent. If you log out you may need to reset, e.g.
```
cd ~/day1
conda activate sourmash
```
and it will never _hurt_ to do that.
## Philosophical thoughts: why are there so many moving parts??
Lots of reasons, including just historical contingency... Here's my attempt at a minimal explanation:
* Interactive interfaces like RStudio and Jupyter are good at doing things interactively... but sometimes you want to leave your computer to run over night! And that's where shell comes in really handy: for automation.
* The shell is also really good at manipulating file names/locations and running programs on them, but less good at dealing with the contents of the files. So we use R and Python for file examination, and UNIX shell for organizing files.
* R and Python have significantly overlapping areas of expertise, but R is better for statistics and data viz, while Python is better for certain kinds of machine learning and programming tasks.
Realistically to do metagenomics (which involves large files that take a long time to process) you will often _start_ with shell and then move into R and Python for digesting the results.
To confuse things more... programs like sourmash are written IN Python but you run them VIA the shell. INCEPTION.
### Why so many different programs?
This embodies the "carpentry" aspect: there are many different things you can do at each stage of sourmash (for example), so we break the process down into simple blocks. Search, then taxonomize, then visualize one way, then visualize another way. Use program X, or program Y, or language Z... etc.
### Different styles fit different people
How I do things:
(You do not need to do things my way :)
I use Python a lot. My lab uses R a lot. (R is a good starting point!)
I use the shell a whole bunch. You kind of have to if you are going to do anything large scale, because of automation.
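For example, here's the kind of small automation the shell makes easy (a minimal sketch, reusing the gather command from earlier and assuming a directory with several `.sig` files in it):
```shell
# run sourmash gather on every .sig file in the current directory, one at a time
for sig in *.sig
do
    sourmash gather -k 51 --scaled 10_000 "$sig" \
        /opt/shared-2/sourmash-db/entire-2025-07-11.k51.rocksdb \
        -o "${sig%.sig}.gather.csv"
done
```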
I use conda a lot, to install software. You should, too, if you're doing bioinformatics at the command line.
I use 'screen' a lot; other people use 'tmux', or nothing at all. You might be interested in this if you are finding it difficult to keep interactive UNIX programs running and/or track program output. Especially if you're using 'srun' a lot... Ask me!
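A minimal screen workflow looks something like this (tmux has equivalent commands):
```shell
screen -S gather-run      # start a new named session
# ...start your long-running command inside it...
# detach with Ctrl-a then d; the command keeps running in the background
screen -ls                # list your sessions
screen -r gather-run      # reattach to the named session later
```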
### Inspecting computers and their properties
`free` will tell you how much working memory (RAM) you have available. The most important number here is 'total' - each computer has 58 GB of RAM available.
```
(base) stamps@149.165.151.217:~$ free -h
               total        used        free      shared  buff/cache   available
Mem:            58Gi       2.2Gi        54Gi       1.6Mi       2.7Gi        56Gi
```
`top` will show you what is currently running and how much CPU and memory each program is using. Use 'q' to get out of the interface.
`df -h ~/` will tell you how much disk space is available in your home directory:
```
df -h ~/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        58G   44G   14G  76% /
```
The important number here is "Avail": that's how much free disk space you have!
### Some advanced commands
`scp` and `sftp` are great for transferring files between remote computers. If you want to transfer a 10 GB file from your lab's remote computer to your STAMPS computer, this is the way!
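For example, to pull a file from another machine onto your STAMPS computer, something like this works (a sketch; the hostname and path are placeholders for your own lab computer):
```shell
# run this in a Terminal on the STAMPS computer;
# it copies big_file.fastq.gz from your lab's machine into ~/day1
scp yourname@lab-computer.example.edu:/path/to/big_file.fastq.gz ~/day1/
```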
## Some concluding thoughts
Happy to do a whiteboarding session on "how computers (and software) work" some evening.
Also happy to do small group/individual discussions on your computing environment and your computing needs. Evening sessions are good times for this; schedule something with me! I'll be here the entire course! :)
## Links to follow!
R tutorials tonight!! :)
[Software Carpentry shell tutorial](https://swcarpentry.github.io/shell-novice/)
[Conda for installing software](https://hackmd.io/VTcCz9dmSf6vclaHRwavlw?view)
[Happy Belly Bioinformatics!](https://astrobiomike.github.io/all_tutorials/) has tutorials on many things. Happy to talk you through the options!
### More advanced links:
[Shell scripting for automation](https://hackmd.io/Sksqf7jXTHqbq0BC4oEJzQ?view)
[A brief overview of automation and parallelization options in UNIX/on an HPC](http://ivory.idyll.org/blog/2023-automation-and-parallelization.html)
snakemake for automating workflows: [a draft book](https://ngs-docs.github.io/2023-snakemake-book-draft/)