<img src="https://github.com/InsightSoftwareConsortium/GetYourBrainStraight/blob/main/HCK01_2022_Virtual/logos/banner.png?raw=true" width="100%" />
---
###### tags: `Spring 2022` `Brain Image Library` `hackathon`
# Get Your Brain Straight
## How to create a new tutorial
- Example tutorial topics include how to use an open source registration tool, how to access an open access brain imaging dataset and explore its properties, or how to use community compute and data archive resources.
- Post any questions about the tutorial idea and team on the [Get Your Brain Straight mailing list][mailing-list], our communication mechanism.
- When you are ready, add a new entry in the list of **Tutorials** by creating a new `README.md` file in a subfolder of the `Tutorials` folder, and copying contents of the [tutorial description template][tutorial-description-template] file into it. Step-by-step instructions for this are:
1. Open [tutorial description template][tutorial-description-template] and copy its full content to the clipboard
1. Go back to [Tutorials](https://github.com/InsightSoftwareConsortium/GetYourBrainStraight/tree/main/HCK01_2022_Virtual/Tutorials/) folder on GitHub
1. Click on *Add files* -> *Create new file* buttons.
1. Type `YourTutorialName/README.md`
1. Paste the previously copied content of tutorial template page into your new `README.md`
1. Update at least your project's **Title, Instructors, Tutorial Description** sections
1. Add a link to your project to the [main tutorial list](../#tutorials-how-to-add-a-tutorial)
Note: some steps above may require creating a [pull request](https://help.github.com/articles/creating-a-pull-request/) until your account is given write access.
[mailing-list]: https://groups.google.com/g/brain_straight_hackathon_announcements
[tutorial-description-template]: https://raw.githubusercontent.com/InsightSoftwareConsortium/GetYourBrainStraight/main/HCK01_2022_Virtual/Tutorials/Template/README.md
## Resources
:::info
:popcorn: Before we begin: whenever you see a code block like this one in this document
```=
echo "I am a code block!"
```
it lists code that you can run in a terminal when connected to our resources

:::
As a member of the [Brain Image Library](https://www.brainimagelibrary.org/) project you have access to
* The virtual machine `workshop.brainimagelibrary.org` with 2.5 TB of memory, 56 cores, and an NVIDIA RTX 8000 GPU with 4608 CUDA cores and 48 GB of GPU memory
* The virtual machine `workshop2.brainimagelibrary.org` with 1.5 TB of memory, 144 cores (hyperthreaded), and 2 [NVIDIA V100](https://www.nvidia.com/en-us/data-center/v100/) GPUs, each with 5120 CUDA cores and 32 GB of GPU memory, coupled with NVLink.
* 8 large-memory compute nodes that can be accessed using SLURM from within the virtual machine in a partition named `compute`.
:::info
:bulb: The VM `workshop.bil.psc.edu` will generally be online for use by members of the Brain Image Library. Should a resource become unavailable for updates or upgrades, you will receive a notification from the team.
:::
### Connecting to the `workshop` VM
#### Using x2go
* Download and install the appropriate x2go client (Windows, Linux, or Mac) from [here](https://wiki.x2go.org/doku.php).
* Start the x2go client.

* Under the Session menu, select `New Session`
* Under host, enter the VM name `workshop.bil.psc.edu`
* Enter your PSC login name (e.g. `ropelews` or `icaoberg`)
* Under session type, select `MATE`

* On the right side of the screen you should see a box labeled `New session`; click it with the left mouse button.
* Log in using your username and password.
* A new window will appear (usually within 10 seconds). Left-click in this new window to open a submenu, then select `xterm` to start a terminal.

* From the `xterm` window, start an application with graphical output, such as Vaa3D or Fiji. The application will appear in the window.
#### Using Terminal
Open terminal and run the command
```
ssh <your-username>@workshop.bil.psc.edu
```
For example,
```
ssh icaoberg@workshop.bil.psc.edu
icaoberg@workshop.bil.psc.edu's password:
Last login: Mon Jan 24 10:46:38 2022 from pool-71-162-2-190.pitbpa.fios.verizon.net
********************************* W A R N I N G ********************************
You have connected to workshop.bil.psc.edu
This computing resource is the property of the Pittsburgh Supercomputing Center.
It is for authorized use only. By using this system, all users acknowledge
notice of, and agree to comply with, PSC polices including the Resource Use
Policy, available at http://www.psc.edu/index.php/policies. Unauthorized or
improper use of this system may result in administrative disciplinary action,
civil charges/criminal penalties, and/or other sanctions as set forth in PSC
policies. By continuing to use this system you indicate your awareness of and
consent to these terms and conditions of use.
LOG OFF IMMEDIATELY if you do not agree to the conditions stated in this warning
Please contact support@psc.edu with any comments/concerns.
********************************* W A R N I N G ********************************
```
If you can see the message above when you connect, then you should be ready to start using the resources.
### LMOD
<img src='https://i.imgur.com/TiNg8y8.png' width="25%" />
Lmod is a Lua-based environment module system. Environment modules provide a convenient way to dynamically change a user's environment through modulefiles.
In a nutshell, we use Lmod to manage software on the VM as well as on the large-memory nodes. Software available as modules should be accessible on both resources.
This document only lists a few commands. For complete documentation click [here](https://lmod.readthedocs.io/en/latest/010_user.html).
:::info
:bulb: If you want us to install a piece of software in our resources, then please remember to submit software installation requests to `bil-support@psc.edu`.
:::
#### Listing available modules
To list all available software modules use the command
```
module avail
```
For example
```
module avail
-------------- /bil/modulefiles ---------------
anaconda/3.2019.7
anaconda3/4.10.1
aspera/3.9.6(default)
bcftools/1.9(default)
bioformats/6.0.1
bioformats/6.1.1
bioformats/6.4.0
bioformats/6.5.1
bioformats/6.6.1(default)
bioformats2raw/0.2.4(default)
c-blosc/1.19.0(default)
dust/0.5.4
ffmpeg/20210611
```
:::info
:envelope: Cannot find the software you need to explore the collections? Then please send a request to `bil-support@psc.edu`.
:::
#### Listing specific modules
To list specific modules use the command
```
module avail <package-name>
```
For example,
```
module avail matlab
-------------- /bil/modulefiles ---------------
matlab/2019a matlab/2021a
```
#### Listing useful information
To list useful info about a module use the command
```
module help <package-name>
```
For example,
```
module help matlab
----------- Module Specific Help for 'matlab/2021a' ---------------
Matlab 2021a
------------
To enable, first load the following required modules (via module load command):
module load matlab/2021a
For a full list of binaries included in this module, type
module what-is matlab/2021a
```
#### Loading modules
To load a module use the command
```
module load <package-name>
```
For example,
```
module load matlab/2021a
```
Running the command above makes the `matlab` binary available in the current session
```
which matlab
/bil/packages/matlab/R2021a/bin/matlab
```
In this example, you can simply type `matlab` to start MATLAB
```
matlab -nodesktop
MATLAB is selecting SOFTWARE OPENGL rendering.
< M A T L A B (R) >
Copyright 1984-2021 The MathWorks, Inc.
R2021a Update 5 (9.10.0.1739362) 64-bit (glnxa64)
August 9, 2021
To get started, type doc.
For product information, visit www.mathworks.com.
>>
```
##### Loading a specific version of a module
There are times when multiple versions of the same software are available.
For example,
```
module avail bioformats
---------------------- /bil/modulefiles ----------------------
bioformats/6.0.1 bioformats/6.7.0
bioformats/6.1.1 bioformats/6.8.0(default)
bioformats/6.4.0 bioformats2raw/0.2.4
bioformats/6.5.1 bioformats2raw/0.3.0(default)
bioformats/6.6.1
```
If you wish to load a specific version of a package use the command
```
module load <package>/<version>
```
For example,
```
module load bioformats/6.4.0
```
#### Listing loaded modules
To list the loaded modules use the command
```
module list
```
For example,
```
module list
Currently Loaded Modulefiles:
1) matlab/2021a
```
#### Unloading modules
To unload a module use the command
```
module unload <package-name>
```
For example,
```
module unload matlab/2021a
```
#### Using modules in scripts
When building scripts that use more than one tool available as a module, simply add a `module load` command for each tool
```=
#!/bin/bash
module load matlab/2021a
module load bioformats
```
### SLURM

[Slurm](https://slurm.schedmd.com/documentation.html) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters.
This document only lists a few commands. For complete documentation click [here](https://slurm.schedmd.com/documentation.html).
#### sinfo
```
sinfo - View information about Slurm nodes and partitions.
SYNOPSIS
sinfo [OPTIONS...]
```
For example
```
sinfo -p compute
```
#### squeue
```
squeue - view information about jobs located in the Slurm scheduling queue.
SYNOPSIS
squeue [OPTIONS...]
```
For example
```
squeue -u icaoberg
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
14243 compute script.s icaoberg R 15:34 1 l001
```
#### scontrol
```
scontrol - view or modify Slurm configuration and state.
SYNOPSIS
scontrol [OPTIONS...] [COMMAND...]
```
As a regular user you can view information about the nodes and jobs but won't be able to modify them.
To view information about the nodes use the command
```
scontrol show nodes
```
To view information about a specific node, pass the node name. For example
```
scontrol show nodes l002
NodeName=l002 Arch=x86_64 CoresPerSocket=20
CPUAlloc=0 CPUTot=80 CPULoad=0.03
AvailableFeatures=(null)
ActiveFeatures=(null)
Gres=(null)
NodeAddr=l002 NodeHostName=l002 Version=18.08
OS=Linux 4.18.0-305.7.1.el8_4.x86_64 #1 SMP Tue Jun 29 21:55:12 UTC 2021
RealMemory=3000000 AllocMem=0 FreeMem=3090695 Sockets=4 Boards=1
State=IDLE ThreadsPerCore=1 TmpDisk=0 Weight=1 Owner=N/A MCS_label=N/A
Partitions=compute
BootTime=2021-07-16T15:47:48 SlurmdStartTime=2021-08-03T20:58:25
CfgTRES=cpu=80,mem=3000000M,billing=80
AllocTRES=
CapWatts=n/a
CurrentWatts=0 LowestJoules=0 ConsumedJoules=0
ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s
```
Because there is only one partition, you can run `sinfo` or `sinfo -p compute` to gather basic information about it.
For example
```
→ sinfo -p compute
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
compute* up infinite 8 idle l[001-008]
```
#### sbatch
```
sbatch - Submit a batch script to Slurm.
SYNOPSIS
sbatch [OPTIONS(0)...] [ : [OPTIONS(N)...]] script(0) [args(0)...]
```
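For example, a minimal batch script for the `compute` partition might look like the following sketch; the requested memory, time limit, and module are illustrative values, not requirements:

```shell
# Write a minimal SLURM batch script; the requested resources are illustrative.
cat > job.sh <<'EOF'
#!/bin/bash
#SBATCH -p compute
#SBATCH --mem=16Gb
#SBATCH -t 00:30:00

# this line is needed to be able to use modules on the compute nodes
source /etc/profile.d/modules.sh
module load bioformats
hostname
EOF
```

Submit it with `sbatch job.sh` and monitor it with `squeue -u <username>`.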
##### interact
The interact command is an in-house script for starting interactive sessions
```
interact -h
Usage: interact [OPTIONS]
-d Turn on debugging information
--debug
--noconfig Do not process config files
-gpu Allocate 1 gpu in the GPU-shared partition
--gpu
--gres=<list> Specifies a comma delimited list of generic consumable
resources. e.g.: --gres=gpu:1
--mem=<MB> Real memory required per node in MegaBytes
-N Nodes Number of nodes
--nodes
-n NTasks Number of tasks (spread over all Nodes)
--ntasks-per-node=<ntasks> Number of tasks, 1 per core per node.
-p Partition Partition/queue you would like to run on
--partition
-R Reservation Reservation you would like to run on
--reservation
-t Time Set a limit on the total run time. Format include
mins, mins:secs, hours:mins:secs. e.g. 1:30:00
--time
-h Print this help message
-?
```
* At the moment there is only one partition, named `compute`, so the two commands below are equivalent:
```
interact
```
or
```
interact -p compute
```
* To specify the amount of memory use the option `--mem=<MB>`. For example, `interact --mem=1Tb`.
* `compute` is a shared partition; if you wish to get all the resources on a compute node, use the option `--nodes`, for example `interact -N 1`. Since this is a shared resource, please be considerate when doing so.
#### scancel
```
scancel - Used to signal jobs or job steps that are under the control of Slurm.
SYNOPSIS
scancel [OPTIONS...] [job_id[_array_id][.step_id]] [job_id[_array_id][.step_id]...]
```
* To cancel a specific job use the command `scancel <job_id>`. For example `scancel 00001`
* To cancel all your running jobs use the command `scancel -u <username>`. For example `scancel -u icaoberg`.
### Docker
Docker is not supported, and it is not expected to be supported in the near future.
#### uDocker
<img src="https://github.com/indigo-dc/udocker/raw/master/docs/logo-small.png" width="25%"/>
If you want to run a program/tool and not a service, then [uDocker](https://github.com/indigo-dc/udocker) might be an option for you.
To install uDocker
```
module load anaconda3/4.11.0
pip install --user udocker
```
or
```
module load anaconda3/4.11.0
python -m venv .
source ./bin/activate
pip install udocker
```
For example,
```
udocker pull jtduda/python-itk-sitk-ants:0.1.0
```
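Once an image has been pulled, you create a container from it and then run commands inside it; a sketch (the container name `itk` is an arbitrary choice here, and the commands assume the image provides `python3` with ITK installed, as its name suggests):

```shell
# create a named container from the pulled image, then run a command inside it
udocker create --name=itk jtduda/python-itk-sitk-ants:0.1.0
udocker run itk python3 -c "import itk; print(itk.__version__)"
```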
### Singularity
#### Building containers
##### If you have elevated privileges to run Singularity
Make sure to run Singularity builds as `sudo`; that's all that matters.
:::warning
:warning: If you are constantly building containers, then run
```
singularity cache clean
```
often to clean cache.
:::
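As a sketch, a local build takes a definition file and produces an image. The definition below is a hypothetical minimal example; the `lolcow.def` name and its contents are illustrative, not a BIL-provided file:

```
Bootstrap: docker
From: ubuntu:20.04

%post
    apt-get update && apt-get install -y cowsay
```

With that saved as `lolcow.def`, build it with `sudo singularity build lolcow.sif lolcow.def`.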
##### If you don’t have elevated privileges
This applies to all regular users including researchers and hackathon participants. If you do not have elevated privileges, you can still build the images remotely.
Follow these steps to do so
* Create an account on SyLabs.io.
* Click `Access Tokens` on the top-right menu

* Click Create a `New Access Token`

* Click `Copy token to Clipboard`

* Login to the `workshop` VM and run the command
```
singularity remote login
```
* Paste the token and press `Enter`.
Just make sure to use the `--remote` flag when running `singularity build`.
Check the `rbuild.sh` scripts in each repo for working examples.
:::warning
:warning: If you are constantly building containers remotely, make sure to erase them from your account to avoid running out of space.
:::
To see a list of vetted containers built by PSC, click [here](https://github.com/pscedu/singularity).
#### Example
<img src="https://camo.githubusercontent.com/d18f42e3f0fc2eb3d39084c23a19eaf5a65c25ed7d14fcef9c00c8176680fa95/68747470733a2f2f75706c6f61642e77696b696d656469612e6f72672f77696b6970656469612f636f6d6d6f6e732f382f38302f436f777361795f5479706963616c5f4f75747075742e706e67">
You can find a Singularity definition file [here](https://github.com/pscedu/singularity-cowsay). To build this image remotely run
```
git clone https://github.com/pscedu/singularity-cowsay.git
cd singularity-cowsay/3.04
singularity build --remote cowsay.sif Singularity
```
after getting a token from SyLabs.io.
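Once built, the image can be run directly; for instance (assuming the image bundles the `cowsay` binary, as the repository name suggests):

```shell
singularity exec cowsay.sif cowsay "Hello BIL"
```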
## Other
### Installing Miniconda
<img src="https://upload.wikimedia.org/wikipedia/commons/e/ea/Conda_logo.svg" width="50%" /><br>
There is nothing preventing you from installing a Conda distribution in your home directory, though this is not advised.
However, if you need to, you might want to start with a Miniconda distribution
```=
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash ./Miniconda3-latest-Linux-x86_64.sh
```
and follow the instructions on screen.
:::info
:pencil: If you use the default values in the Conda install, it will install in your home directory on `/bil/`
:::
### Using Jupyter Notebooks
#### Loading the proper module
Click the black terminal icon in the top-left corner of your screen

When the terminal opens, type the following commands to start JupyterLab.
```
module load anaconda3
jupyter lab
```
Running the commands above will open a browser

The Anaconda3 installation available on BIL infrastructure includes commonly used data-analytics packages as well as packages commonly used in neuroscience, such as
* allensdk
* biopython
* bokeh
* Dask
* napari
* nilearn
* neuron
* pandas
* pytorch
* scikit-image
* scikit-learn
* starfish
* Theano
To see a full list of packages, run the following command in a terminal
```
pip list
```
:::info
:bulb: Nothing prevents you from downloading and installing your own Anaconda/Miniconda distribution in your home directory. However, user support is limited if you choose to do so.
For more info, click [here](https://docs.anaconda.com/anaconda/install/linux/).
:::
#### Using Jupyter Notebooks on the large memory nodes
This section explains how to run a Jupyter notebook on a BIL compute node using a Jupyter client on the workshop VM. Follow the steps below.
1. Login to the workshop VM using `x2go`.
2. Load an anaconda module to put the latest version of anaconda and Jupyter in your path.
```
module load anaconda3
```
3. Get a BIL compute node allocated for you by using the `interact` command. For example,
```
interact -n 10 --mem=64Gb
A command prompt will appear when your session begins
"Ctrl+d" or "exit" will end your session
```
4. Find the hostname of the node you are running on.
You will need the hostname when mapping a port on the workshop VM to the port on the BIL compute node. Find the hostname of the node you are on from the prompt, or type the `hostname` command.
```
04:37:12 icaoberg@l001 ~ → hostname
l001.pvt.bil.psc.edu
```
5. Start a Jupyter notebook and find the port number and token.
From the output of that command, find the port that you are running on and your token. Pay attention to the port number you are given; you will need it to make the connection between the compute node and the workshop VM.
The port number Jupyter uses on the compute node can be different each time a notebook is started. Jupyter first attempts to use port 8888, but if that port is taken – by a different Jupyter user, for example – it increases the port number by one and tries again, repeating this until it finds a free port. The one it settles on is the one it reports.
```
jupyter notebook --no-browser --ip=0.0.0.0
[W 2022-03-22 16:45:13.993 LabApp] 'ip' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2022-03-22 16:45:13.994 LabApp] 'ip' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2022-03-22 16:45:13.994 LabApp] 'ip' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[I 2022-03-22 16:45:14.013 LabApp] JupyterLab extension loaded from /bil/packages/anaconda3/4.11.0/lib/python3.9/site-packages/jupyterlab
[I 2022-03-22 16:45:14.013 LabApp] JupyterLab application directory is /bil/packages/anaconda3/4.11.0/share/jupyter/lab
[I 16:45:14.024 NotebookApp] Serving notebooks from local directory: /bil/users/icaoberg
[I 16:45:14.024 NotebookApp] Jupyter Notebook 6.4.5 is running at:
[I 16:45:14.025 NotebookApp] http://l001.pvt.bil.psc.edu:8888/?token=6dd14be951e5717631f9646d292a7dae06c8173c1d850c1b
[I 16:45:14.025 NotebookApp] or http://127.0.0.1:8888/?token=6dd14be951e5717631f9646d292a7dae06c8173c1d850c1b
[I 16:45:14.025 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 16:45:14.039 NotebookApp]
To access the notebook, open this file in a browser:
file:///bil/users/icaoberg/.local/share/jupyter/runtime/nbserver-1022410-open.html
Or copy and paste one of these URLs:
http://l001.pvt.bil.psc.edu:8888/?token=6dd14be951e5717631f9646d292a7dae06c8173c1d850c1b
or http://127.0.0.1:8888/?token=6dd14be951e5717631f9646d292a7dae06c8173c1d850c1b
```
6. Map a port on the workshop VM to the port Jupyter is using on the BIL compute node. Open another terminal to map port 8888 on the workshop VM to the port you are using (8888 in this example) on the compute node. This is a bit of a chicken-and-egg situation: if you knew which node and port you would end up on, you could have done the mapping on the first connection, but there is no way to know that a priori.
In the new terminal type
```
ssh -L <local-port>:<compute-host-name>:<compute-node-port> workshop.bil.psc.edu -l <username>
```
You must use the correct compute node name and port that you have been allocated. In this case, because you were connected to port 8888 on compute node l001, the command would look like
```
ssh -L 8888:l001.pvt.bil.psc.edu:8888 workshop.bil.psc.edu -l icaoberg
```
Here the localhost port is `8888`. After the first `:` comes the long name of the compute node, a colon, and the port where Jupyter is running. Here, that string is `l001.pvt.bil.psc.edu:8888`.
7. Open a browser window to connect to the Jupyter server. On the workshop VM, open a browser and point it to `http://localhost:8888`.

You will be prompted to enter a token to make the connection. Use the token given when you started the Jupyter server on the BIL compute node (step 5 above).

8. When you are done, close your interactive session on BIL.
### Using Matlab
If you wish to use Matlab, then please request permission to use it first by filling this [form](https://www.psc.edu/resources/software/matlab/permission-form/).
After getting access you can use it with
```
module load matlab/2021a
matlab -nosplash
```

### Using ITK-SNAP
ITK-SNAP is a software application used to segment structures in 3D medical images.
```
module load itksnap/3.8.0
itksnap
```

### Installing and using SimpleITK
<img src="https://itk.org/Wiki/images/9/95/SimpleITK-SquareTransparentLogo.png" width="25%"/>
SimpleITK is a simplified, open-source interface to the Insight Segmentation and Registration Toolkit.
```
module load anaconda3/4.11.0
python -m venv .
source ./bin/activate
pip install numpy scipy SimpleITK
```
For example, to load an image as a numpy array
```
import SimpleITK as sitk
import numpy as np
file = '/bil/data/hackathon/2022_GYBS/data/subject/201606_red_mm_RSA.nii.gz'
image = sitk.ReadImage(file)
arr = sitk.GetArrayFromImage(image)
```
If you are using IPython you can confirm this
```
In [2]: whos
Variable Type Data/Info
-------------------------------
arr ndarray 1090x942x997: 1023699660 elems, type `uint8`, 1023699660 bytes (976.2760734558105 Mb)
file str /bil/data/hackathon/2022_<...>/201606_red_mm_RSA.nii.gz
image Image Image (0x55faa4652f50)\n <...> Capacity: 1023699660\n
np module <module 'numpy' from '/bi<...>kages/numpy/__init__.py'>
sitk module <module 'SimpleITK' from <...>s/SimpleITK/__init__.py'>
```
:::info
:bulb: If you are working in a virtual environment you might also want to install other useful libraries like
```
pip install matplotlib scipy pandas
```
:::
### Installing ITK
<img src="https://itk.org/wp-content/uploads/2019/10/ITK_Logo_Large-300x143.png" />
ITK is an open-source, cross-platform library that provides developers with an extensive suite of software tools for image analysis.
```
module load anaconda3/4.11.0
pip install --user itk
```
or
```
module load anaconda3/4.11.0
python -m venv .
source ./bin/activate
pip install itk
```
For a quick guide to ITK please visit [here](https://itkpythonpackage.readthedocs.io/en/master/Quick_start_guide.html).
For example, to read an image
```
import itk
import numpy as np
# Read input image
file = '/bil/data/hackathon/2022_GYBS/data/subject/201606_red_mm_RSA.nii.gz'
itk_image = itk.imread(file)
# View only of itk.Image, pixel data is not copied
np_view = itk.array_view_from_image(itk_image)
```
If you are using IPython you can confirm this
```
Variable Type Data/Info
---------------------------------------
file str /bil/data/hackathon/2022_<...>/201606_red_mm_RSA.nii.gz
itk LazyITKModule <module 'itk' from '/bil/<...>ackages/itk/__init__.py'>
itk_image itkImageUC3 Image (0x5653b5da85f0)\n <...> Capacity: 1023699660\n
np module <module 'numpy' from '/bi<...>kages/numpy/__init__.py'>
np_view NDArrayITKBase [[[0 0 0 ... 0 0 0]\n [0<...>0]\n [0 0 0 ... 0 0 0]]]
```
:::info
:bulb: If you are working in a virtual environment you might also want to install other useful libraries like
```
pip install matplotlib scipy pandas
```
:::
### Installing nibabel
<img src="https://nipy.org/nibabel/_static/nibabel-logo.svg" /><br>
Read/write access to some common neuroimaging file formats.
```
module load anaconda3/4.11.0
pip install --user nibabel
```
or
```
module load anaconda3/4.11.0
python -m venv .
source ./bin/activate
pip install nibabel
```
### Installing cwltool
<img src="https://repository-images.githubusercontent.com/43816051/ba006580-04d7-11eb-9bab-c6463ba5022b" width="30%" />
This is the reference implementation of the Common Workflow Language.
```
module load anaconda3/4.11.0
pip install --user cwltool cwlref-runner
```
or
```
module load anaconda3/4.11.0
python -m venv .
source ./bin/activate
pip install cwltool cwlref-runner
```
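As a quick smoke test, here is a minimal, hypothetical tool description that simply wraps `echo` (CWL v1.0):

```
cwlVersion: v1.0
class: CommandLineTool
baseCommand: echo
inputs:
  message:
    type: string
    inputBinding:
      position: 1
outputs: []
```

Save it as `echo.cwl` and run it with `cwltool echo.cwl --message "Hello BIL"`.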
### Installing `spyder`
Spyder is a free and open source scientific environment written in Python, for Python, and designed by and for scientists, engineers and data analysts.
```
export QT_XCB_GL_INTEGRATION=none
module load anaconda3/4.11.0
python3 -m venv .
source ./bin/activate
pip install spyder
spyder
```

## Examples
### Example 1. Load and combine images in Fiji
#### Loading the proper module
Click the black terminal icon in the top left corner of your screen

When the terminal opens, type the following commands to load Fiji into your workspace.
```
module load fiji
fiji
```
The first time you run these commands, the system will install Fiji in your home directory.
:::info
:bulb: If the font size in your terminal is too small, you can press CTRL and + to increase the font size.

:::
After running the commands above, a toolbar should appear on your screen, similar to the picture below

#### Update Fiji (optional)

and then update the plugins

#### Loading the first channel
On the "(Fiji Is Just) ImageJ" window select menu
```
[PLUGINS]->[BIOFORMATS][BIOFORMATS-IMPORTER]
```

Click on FILE SYSTEM in the PLACES sidebar and navigate to
```
/bil/workshops/2021/data_submission/data/fiji/stitchedImage_ch1
```

then click on
```
StitchedImage_Z001_L001.jp2
```

On the "Bio-Formats Input Options" popup select

- View Stack with Hyperstack
- Group files with similar names
- Color mode: Custom
- Click [OK]

On the "Bio-Formats File Stitching" popup select

- Axis 1 number of images enter 5
- Axis 1 axis first image enter 1
- Axis 1 axis increment enter 54
- Click [OK]

On the "Bio-Formats Series Options" popup select

- Series 1 (8557x11377)
- Click [OK]

On the “Bio-Formats Custom Colorization” popup select

- Series 0 channel 0 Red 255
- Click [OK]
#### Loading the second channel
We will follow a similar procedure as with the first channel.
On the "(Fiji Is Just) ImageJ" window select
```
[PLUGINS]->[BIOFORMATS][BIOFORMATS-IMPORTER]
```

Click on FILE SYSTEM in the PLACES sidebar and navigate to
```
/bil/workshops/2021/data_submission/data/fiji/stitchedImage_ch2
```
then click on
```
StitchedImage_Z001_L001.jp2
```
On the "Bio-Formats Input Options" popup select

- View Stack with Hyperstack
- Group files with similar names
- Do Not Use virtual stack
- Click [OK]

On the "Bio-Formats File Stitching" popup select

- Axis 1 number of images enter 5
- Axis 1 axis first image enter 1
- Axis 1 axis increment enter 54
- Click [OK]

On the "Bio-Formats Series Options" popup select

- Series 1 (8557x11377)
- Click [OK]

On the “Bio-Formats Custom Colorization” popup select

- Series 0 channel 0 Red 255
- Click [OK]
#### Merge 2 channels into one

On the "(Fiji Is Just) ImageJ" window select


```
[IMAGE]->[COLOR]->[MERGE-CHANNELS]
```
- For C1 (red) select the first item
- For C2 (green) select the second item
- Click [OK]

#### Adjust Brightness/Contrast
On the "(Fiji Is Just) ImageJ" window select

```
[IMAGE]->[ADJUST]->[BRIGHTNESS/CONTRAST]
```
On the “B&C” popup window

- Set brightness to max
- Set contrast to max
- Click on the [Set] button

On the “Set Display Range” popup window

- Set min=0
- Set max=1500
- Check propagate to all other open windows
- Click [OK]

#### View Composite Z stack and zoom

On the "Composite" window:

- Move the "Z" slider slowly to the right/left to view the Z stack.
- Move the mouse cursor (+) to the top-left of an area that is interesting.
- While holding the left mouse button, drag the selection box to the right and down.
- Move the mouse cursor to the center of the box.
- Press the + key to zoom in, - to zoom out.
- To get back to the original resolution, move the mouse cursor outside the selection box, click the right mouse button, and select "Original Scale".
#### To Save the combined-channel images
On the "(Fiji Is Just) ImageJ" window select

```
[FILE]->[SAVE AS]->[IMAGE SEQUENCE]
```
On the "Save Image Sequence" popup set values

- Format: TIFF
- Click [OK]

On the "Save Image Sequence" popup set values

- Set DIR to someplace where you can save (e.g. /bil/home/$USER or your Desktop)
- Set the name
- Click [OK]
#### To make an animated thumbnail
On the "(Fiji Is Just) ImageJ" window select

```
[Image]->[Type]->[RGB Color]
```
On the "Convert to RGB" window select

- Slices (5)
- Keep Source
- Click [OK]
The Image now needs to be downsized.
On the "(Fiji Is Just) ImageJ" window select

```
[IMAGE]->[SCALE]
```
- Delete the "Width (pixels)" value and replace it with the value 480.
- The Height should automatically be set to 638.
- Set the Title to "Composite-small"
- Click [OK]

The next step is to save the reduced size animated thumbnail.
On the "(Fiji Is Just) ImageJ" window select

```
[IMAGE]->[STACKS]->[ANIMATIONS]->[ANIMATION OPTIONS]
```
On the “Animation Options” popup set

- Speed to 2
- Click [OK]
On the "(Fiji Is Just) ImageJ" window select

```
[FILE]->[SAVE AS]->[GIF]
```
On the "Save as GIF" popup, select a directory and filename, then click Save

If you want to check out the saved GIF, double-click the file on your desktop to open it with the default viewer.
Close Fiji.
### Example 2. Contrast-stretching with ImageMagick
This exercise ties together the concepts discussed in this workshop.
Imagine we are interested in collection `84c11fe5e4550ca0`, which I found in the portal

:::info
:bulb: There is no need to download the data locally because it is available when you use our resources.
:::

*I can navigate to `/bil/data/84/c1/84c11fe5e4550ca0/` to see the contents of the collection.*
Unfortunately, it is difficult to visually inspect the images because they are not contrast-stretched.

*The images are not contrast stretched and cannot be visually inspected.*
Fortunately, tools like Fiji can contrast-stretch the images. However, I want to do this in batch mode as a job, since this process can be automated.

[ImageMagick](https://imagemagick.org/index.php) is a robust library for image manipulation. The `convert` tool in this library has an option for [contrast stretching](https://imagemagick.org/script/command-line-options.php#contrast-stretch).
The format is
```
convert <input-file> -contrast-stretch <black-point>% <output-file>
```
Next I will create a file called `script.sh` and place it in a folder on my Desktop.
```
#!/bin/bash
#this line is needed to be able to use modules on the compute nodes
source /etc/profile.d/modules.sh
#this command loads the ImageMagick library
module load ImageMagick/7.1.0-2
#this loop finds all the images in the sample folder and contrast-stretches each one
for FILE in /bil/data/84/c1/84c11fe5e4550ca0/SW170711-04A/*tif
do
  convert "$FILE" -contrast-stretch 15% "$(basename "$FILE")"
done
```
:::info
:bulb: For simplicity, you can find the script above in
```
/bil/workshops/2022/data_submission
```
To copy the script to your Desktop, run this command in a terminal
```
cp /bil/workshops/2022/data_submission/script.sh ~/Desktop/
```
:::
Next I can submit my script using the command
```
sbatch -p compute --mem=64Gb script.sh
```
Since I am processing the files serially I don't need much memory, but a parallel version might.
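If you did want to parallelize, one possible sketch uses a worker pool to process several files at once. The `stretch_one` helper below is hypothetical: in a real job it would shell out to ImageMagick's `convert`; here it does placeholder work so the structure is runnable anywhere.

```python
# Hypothetical sketch of parallelizing the per-file work. In a real job,
# stretch_one would call e.g.
#   subprocess.run(["convert", path, "-contrast-stretch", "15%", out])
# here it just does placeholder "work".
from concurrent.futures import ThreadPoolExecutor

def stretch_one(path):
    # placeholder for the real convert call
    return path.upper()

files = ["a.tif", "b.tif", "c.tif"]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(stretch_one, files))
print(results)
```

Threads are a reasonable choice here because each worker would spend its time waiting on an external `convert` process; a Slurm job array is another common approach.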
To monitor your job progress use the command `squeue -u <username>`. For example,
```
squeue -u icaoberg
JOBID PARTITION     NAME     USER ST   TIME NODES NODELIST(REASON)
14243   compute script.s icaoberg  R  15:34     1 l001
```
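If you want to check job state from a script rather than by eye, a small illustrative parser over the default `squeue` column layout might look like the sketch below (the column positions are an assumption; `squeue`'s `--format` option gives machine-friendly output when you need something robust).

```python
# Illustrative parser for the default `squeue` output shown above.
# Column layout is assumed; prefer `squeue --format` for robust scripts.

def running_jobs(squeue_output):
    """Return {job_id: state} for every job line after the header."""
    jobs = {}
    for line in squeue_output.strip().splitlines()[1:]:
        fields = line.split()
        jobs[fields[0]] = fields[4]  # JOBID and ST columns
    return jobs

sample = """JOBID PARTITION     NAME     USER ST   TIME NODES NODELIST(REASON)
14243   compute script.s icaoberg  R  15:34     1 l001"""
print(running_jobs(sample))
```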
Running the job produces contrast-stretched copies of the images that can now be visually inspected.

### Example 3. vaa3D
#### Finding available tools
To list all available tools, run the command
```
module avail
----------------------------------------------------------- /bil/modulefiles -----------------------------------------------------------
anaconda/3.2019.7 c-blosc/1.19.0(default) knime/4.3.2 raw2ometiff/0.2.6(default)
anaconda3/4.9.2 fiji/1.53h lazygit/0.22.9 samtools/1.9(default)
aspera/3.9.6(default) htslib/1.9(default) md5deep/4.4 scala/2.13.5
bcftools/1.9(default) ilastik/1.3.3 openjpeg/2.3.0(default) singularity/3.7.0
bioformats/6.0.1 imagej-fiji/1.52p openslide/3.4.1 vaa3d/3.601
bioformats/6.1.1 java/jdk8u201 p7zip/16.02 xxhash/0.8.0
bioformats/6.4.0 java/jdk8u211 picard/2.20.2(default)
bioformats/6.5.1(default) java/jdk8u241(default) R/3.5.1
bioformats2raw/0.2.4(default) julia/1.0.5 R/3.6.3
```
To see all the installed versions of a specific package, e.g. java, run the command
```
module avail java
----------------------------------------------------------- /bil/modulefiles -----------------------------------------------------------
java/jdk8u201 java/jdk8u211 java/jdk8u241(default)
```
Running the module load command without specifying a version will load the default version of the software. For example, running
```
module load java
```
will load Java JDK8u241.
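The resolution rule amounts to scanning the `module avail` output for the entry tagged `(default)`. Here is a toy illustration of that rule (not Lmod/Environment Modules internals):

```python
# Toy sketch of default-version resolution: the entry tagged
# "(default)" is what `module load <name>` picks when no version is
# given. Illustrative only, not the module system's actual code.

def default_version(avail_line):
    entries = avail_line.split()
    for entry in entries:
        if entry.endswith("(default)"):
            return entry[: -len("(default)")]
    return sorted(entries)[-1]  # common fallback: highest-sorted version

print(default_version("java/jdk8u201 java/jdk8u211 java/jdk8u241(default)"))
```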
#### Loading the proper module
Click the black terminal icon in the top left corner of your screen

When the terminal opens, type the following commands to load Vaa3D into your workspace.
```
module load vaa3d
vaa3d
```
Running the commands above will start the tool.

For exploration, you can find some examples in
```
/bil/workshops/2021/data_submission/data/vaa3d
```
### Example 4. Building a Singularity container
:::info
:bulb: You can find a vetted list of Singularity containers maintained by PSC, [here](https://github.com/pscedu/singularity).
:::
In this example we will build a Singularity container using the remote builder on [SyLabs.io](http://sylabs.io).
Choose a location in your home directory, and run the following commands
:::warning
:warning: You might need to set up your SSH key or a GitHub personal access token to clone the repository below. For more information click [here](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token).
:::
```=
git clone git@github.com:pscedu/singularity-bioformats2raw.git
cd singularity-bioformats2raw/3.0.0
singularity build --remote bioformats2raw.sif Singularity
```
This will create a local Singularity image file named `bioformats2raw.sif`.
Since you have access to the definition file
```
Bootstrap: docker
From: debian:stretch

%labels
    AUTHOR icaoberg
    EMAIL icaoberg@psc.edu
    SUPPORT help@psc.edu
    WEBSITE http://github.com/icaoberg/singularity-bioformats2raw
    COPYRIGHT Copyright © 2021 Pittsburgh Supercomputing Center. All Rights Reserved.
    VERSION 3.0.0

%post
    apt update
    apt install -y libblosc1 wget unzip default-jdk

    cd /opt/
    wget -nc https://github.com/glencoesoftware/bioformats2raw/releases/download/v0.3.0/bioformats2raw-0.3.0.zip
    unzip bioformats2raw-0.3.0.zip && rm -f bioformats2raw-0.3.0.zip
    ln -s /opt/bioformats2raw-0.3.0/bin/bioformats2raw /usr/local/bin/bioformats2raw

    apt remove -y wget unzip
    apt clean
```
you can tell the binary `bioformats2raw` is available in the container.
To use it, run
```=
module load java
singularity exec -B /bil bioformats2raw.sif bioformats2raw --help
Missing required parameters: '<inputPath>', '<outputLocation>'
Usage: <main class> [-p] [--no-hcs] [--[no-]nested] [--no-ome-meta-export]
[--no-root-group] [--overwrite]
[--use-existing-resolutions] [--version] [--debug
[=<logLevel>]] [--extra-readers[=<extraReaders>[,
<extraReaders>...]]]... [--options[=<readerOptions>[,
<readerOptions>...]]]... [-s[=<seriesList>[,
<seriesList>...]]]...
[--additional-scale-format-string-args=<additionalScaleForma
tStringArgsCsv>] [-c=<compressionType>]
[--dimension-order=<dimensionOrder>]
[--downsample-type=<downsampling>]
[--fill-value=<fillValue>] [-h=<tileHeight>]
[--max_cached_tiles=<maxCachedTiles>]
[--max_workers=<maxWorkers>]
[--memo-directory=<memoDirectory>]
[--pixel-type=<outputPixelType>]
[--pyramid-name=<pyramidName>] [-r=<pyramidResolutions>]
[--scale-format-string=<scaleFormatString>]
[-w=<tileWidth>] [-z=<chunkDepth>]
[--compression-properties=<String=Object>]...
[--output-options=<String=String>[\|<String=String>...]]...
<inputPath> <outputLocation>
<inputPath> file to convert
<outputLocation> path to the output pyramid directory. The given
path can also be a URI (containing ://) which
will activate **experimental** support for
Filesystems. For example, if the output path
given is 's3://my-bucket/some-path' *and* you
have an S3FileSystem implementation in your
classpath, then all files will be written to S3.
--additional-scale-format-string-args=<additionalScaleFormatStringArgsCsv>
Additional format string argument CSV file
(without header row). Arguments will be added
to the end of the scale format string mapping
the at the corresponding CSV row index. It is
expected that the CSV file contain exactly the
same number of rows as the input file has series
-c, --compression=<compressionType>
Compression type for Zarr (null, zlib, blosc;
default: blosc)
--compression-properties=<String=Object>
Properties for the chosen compression (see https:
//jzarr.readthedocs.io/en/latest/tutorial.
html#compressors )
--debug, --log-level[=<logLevel>]
Change logging level; valid values are OFF, ERROR,
WARN, INFO, DEBUG, TRACE and ALL. (default: WARN)
--dimension-order=<dimensionOrder>
Override the input file dimension order in the
output file [Can break compatibility with
raw2ometiff] (XYZCT, XYZTC, XYCTZ, XYCZT, XYTCZ,
XYTZC)
--downsample-type=<downsampling>
Tile downsampling algorithm (SIMPLE, GAUSSIAN,
AREA, LINEAR, CUBIC, LANCZOS)
--extra-readers[=<extraReaders>[,<extraReaders>...]]
Separate set of readers to include; (default:
[class com.glencoesoftware.bioformats2raw.
PyramidTiffReader, class com.glencoesoftware.
bioformats2raw.MiraxReader])
--fill-value=<fillValue>
Default value to fill in for missing tiles (0-255)
(currently .mrxs only)
-h, --tile_height=<tileHeight>
Maximum tile height to read (default: 1024)
--max_cached_tiles=<maxCachedTiles>
Maximum number of tiles that will be cached across
all workers (default: 64)
--max_workers=<maxWorkers>
Maximum number of workers (default: 4)
--memo-directory=<memoDirectory>
Directory used to store .bfmemo cache files
--no-hcs Turn off HCS writing
--[no-]nested Whether to use '/' as the chunk path seprator
(true by default)
--no-ome-meta-export Turn off OME metadata exporting [Will break
compatibility with raw2ometiff]
--no-root-group Turn off creation of root group and corresponding
metadata [Will break compatibility with
raw2ometiff]
--options[=<readerOptions>[,<readerOptions>...]]
Reader-specific options, in format key=value[,
key2=value2]
--output-options=<String=String>[\|<String=String>...]
|-separated list of key-value pairs to be used as
an additional argument to Filesystem
implementations if used. For example,
--output-options=s3fs_path_style_access=true|...
might be useful for connecting to minio.
--overwrite Overwrite the output directory if it exists
--pixel-type=<outputPixelType>
Pixel type to write if input data is float or
double (int8, int16, int32, uint8, uint16,
uint32, float, double, complex, double-complex,
bit)
--pyramid-name=<pyramidName>
Name of pyramid (default: null) [Can break
compatibility with raw2ometiff]
-r, --resolutions=<pyramidResolutions>
Number of pyramid resolutions to generate
-s, --series[=<seriesList>[,<seriesList>...]]
Comma-separated list of series indexes to convert
--scale-format-string=<scaleFormatString>
Format string for scale paths; the first two
arguments will always be series and resolution
followed by any additional arguments brought in
from `--additional-scale-format-string-args`
[Can break compatibility with raw2ometiff]
(default: %d/%d)
--use-existing-resolutions
Use existing sub resolutions from original input
format[Will break compatibility with raw2ometiff]
-w, --tile_width=<tileWidth>
Maximum tile width to read (default: 1024)
-z, --chunk_depth=<chunkDepth>
Maximum chunk depth to read (default: 1)
-p, --progress Print progress bars during conversion
--version Print version information and exit
```
:::info
:bulb: The flag `-B /bil` is important; please use it every time you run a container on the Brain Image Library systems.
:::
To run the application in the container simply run
```=
module load java
singularity exec -B /bil bioformats2raw.sif bioformats2raw /bil/data/hackathon/2022_GYBS/lightsheet/subject/subject0_25.nii.gz raw/
```
also try
```=
module load java
singularity exec -B /bil bioformats2raw.sif bioformats2raw /bil/data/hackathon/2022_GYBS/lightsheet/subject/subject0_25.nii.gz raw2/ --resolutions 6
```
These commands convert the image `/bil/data/hackathon/2022_GYBS/lightsheet/subject/subject0_25.nii.gz` to the Zarr format in the folders `raw/` and `raw2/`, respectively; the `--resolutions` flag sets the number of pyramid levels to generate.
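To see what `--resolutions 6` requests, here is a rough sketch of the resulting multiscale pyramid: each level halves the XY dimensions of the previous one. The 4096-pixel starting size below is purely an assumption for illustration, not read from the file.

```python
# Rough sketch of a multiscale pyramid: each level halves the XY
# dimensions of the previous one. The starting size is hypothetical.

def pyramid_shapes(width, height, resolutions):
    """Return (width, height) per pyramid level, full resolution first."""
    shapes = []
    for _ in range(resolutions):
        shapes.append((width, height))
        width = max(width // 2, 1)
        height = max(height // 2, 1)
    return shapes

print(pyramid_shapes(4096, 4096, 6))
```

Lower-resolution levels let viewers display an overview quickly and only fetch full-resolution chunks on zoom.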
# Gentle intro to cwltool
<img src="https://repository-images.githubusercontent.com/43816051/ba006580-04d7-11eb-9bab-c6463ba5022b" width="30%" />
## Before we begin
<iframe width="560" height="315" src="https://www.youtube.com/embed/86eY8xs-Vo8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Watch the video above for a gentle intro.
## Benefits
There are many benefits to using workflows, chief among them portability. If built flexibly, a workflow can be deployed locally (e.g. on a laptop), on an HPC cluster (e.g. the Brain Image Library or Bridges-2), or in the cloud (e.g. AWS).
The second benefit is that workflows are very good at connecting tools, programs, and scripts written in different languages.
Third and last, workflows can use containers that can be pushed to repositories like DockerHub, making them truly portable and flexible.
:::info
Using workflows is just one way of running pipelines on the Brain Image Library hardware; users can also use traditional approaches like Bash or Python scripts.
:::
## Introduction
### Installing cwltool
```
module load anaconda3/4.11.0
pip install --user cwltool cwlref-runner
```
[`cowsay`](https://en.wikipedia.org/wiki/Cowsay) is a program that prints an ASCII cow speaking a message.
<img src="https://upload.wikimedia.org/wikipedia/commons/8/80/Cowsay_Typical_Output.png" /><br>
Traditionally we would install the tool locally either using a repository or pip.
For example,
```
pip install cowsay
```
will install the binary in our system.
Since the only input to `cowsay` is a string, a basic CWL workflow document looks like this
```
#!/usr/bin/env cwl-runner
cwlVersion: v1.0
class: CommandLineTool
baseCommand: cowsay
inputs:
  message:
    type: string
    inputBinding:
      position: 1
outputs: []
```
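To make the `inputBinding` idea concrete, here is a toy sketch (not cwltool's actual machinery) of how bindings and a job's inputs become a command line: inputs are ordered by `position`, with any `prefix` emitted before the value.

```python
# Toy model of CWL command-line binding: sort inputs by position and
# emit an optional prefix before each value. NOT cwltool's API.

def build_argv(base_command, bindings, job):
    """bindings: {name: {"position": int, "prefix": optional str}}"""
    argv = [base_command]
    for name in sorted(bindings, key=lambda n: bindings[n]["position"]):
        prefix = bindings[name].get("prefix")
        if prefix:
            argv.append(prefix)
        argv.append(str(job[name]))
    return argv

print(build_argv("cowsay", {"message": {"position": 1}},
                 {"message": "Hello world!"}))
```

With a second input bound at position 1 with prefix `-f`, the same rule yields `cowsay -f <cowfile> <message>`.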
CWL documents are written either in YAML or JSON. For example, we can create the file `message.cwl`
```
message: Hello world!
```
and use it as input for the workflow
```
cwltool cowsay.cwl message.cwl
INFO /bil/packages/anaconda3/4.11.0/bin/cwltool 3.1.20220210171524
INFO Resolved 'cowsay.cwl' to 'file:///bil/users/icaoberg/code/singularity-cowsay/3.04/cowsay.cwl'
INFO [job cowsay.cwl] /tmp/l7knmpt3$ cowsay \
'Hello world!'
 ______________
< Hello world! >
 --------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
INFO [job cowsay.cwl] completed success
{}
INFO Final process status is success
```
## `cowsay` on Docker
:::warning
This step cannot run on Brain Image Library hardware since we do not support Docker.
:::
Consider the following `Dockerfile`
```
FROM ubuntu:latest
RUN apt-get update && apt-get install -y cowsay --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV PATH $PATH:/usr/games
CMD ["cowsay"]
```
The file above creates a container with the `cowsay` binary. It can be built using the command
```
docker build -t icaoberg/cowsay .
```
and pushed to DockerHub using the command
```
docker push icaoberg/cowsay
```
This is a dummy example, but technically there now exists a container with my tool. Now, I can recycle the CWL workflow from before and have it pull the container from the repository by adding the lines
```
hints:
  DockerRequirement:
    dockerPull: icaoberg/cowsay
```
The document now looks like
```
#!/usr/bin/env cwl-runner
cwlVersion: v1.0
class: CommandLineTool
requirements:
  SubworkflowFeatureRequirement: {}
hints:
  DockerRequirement:
    dockerPull: icaoberg/cowsay
baseCommand: cowsay
inputs:
  message:
    type: string
    inputBinding:
      position: 1
outputs: []
```
and running it will produce the same results as the previous example
```
cwltool cowsay2.cwl message.cwl
INFO /Users/icaoberg/opt/anaconda3/bin/cwltool 3.1.20220210171524
INFO Resolved 'cowsay2.cwl' to 'file:///Users/icaoberg/Documents/code/singularity-cowsay/3.04/cowsay2.cwl'
INFO [job cowsay2.cwl] /private/tmp/docker_tmpr_rjhbrj$ docker \
run \
-i \
--mount=type=bind,source=/private/tmp/docker_tmpr_rjhbrj,target=/xJHVRn \
--mount=type=bind,source=/private/tmp/docker_tmpzpu0ulbd,target=/tmp \
--workdir=/xJHVRn \
--read-only=true \
--user=501:20 \
--rm \
--cidfile=/private/tmp/docker_tmp07wk4ale/20220309145946-550721.cid \
--env=TMPDIR=/tmp \
--env=HOME=/xJHVRn \
icaoberg/cowsay \
cowsay \
'Hello world!'
 ______________
< Hello world! >
 --------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
INFO [job cowsay2.cwl] Max memory used: 0MiB
INFO [job cowsay2.cwl] completed success
{}
INFO Final process status is success
```
:::info
Even though we do not support Docker, you can try installing [`uDocker`](https://github.com/indigo-dc/udocker).
:::
## `cowsay` on Singularity
The main issue is that most HPC clusters do not support Docker and prefer Singularity/Apptainer. However, if the Docker image on DockerHub has a proper entrypoint, then you can simply use the `--singularity` option to ask cwltool to convert the Docker image to a Singularity image.
:::warning
If the Docker image does not have a proper entry point, this step might fail if you are not aware of how the image was built.
Only use vetted images or public images whose Dockerfile you have seen and trust.
:::
Using the option
```
cwltool --singularity cowsay2.cwl message.cwl
```
will run the workflow
```
cwltool --singularity cowsay2.cwl message.cwl
INFO /bil/packages/anaconda3/4.11.0/bin/cwltool 3.1.20220210171524
INFO Resolved 'cowsay2.cwl' to 'file:///bil/users/icaoberg/code/singularity-cowsay/3.04/cowsay2.cwl'
INFO ['singularity', 'pull', '--force', '--name', 'icaoberg_cowsay.sif', 'docker://icaoberg/cowsay']
INFO: Converting OCI blobs to SIF format
INFO: Starting build...
Getting image source signatures
Copying blob 7c3b88808835 done
Copying blob 6b7a6ea66907 done
Copying config 063d227371 done
Writing manifest to image destination
Storing signatures
2022/03/09 15:17:27 info unpack layer: sha256:7c3b88808835aa80f1ef7f03083c5ae781d0f44e644537cd72de4ce6c5e62e00
2022/03/09 15:17:28 info unpack layer: sha256:6b7a6ea669076a74f122534da10e4e459f36777854e8e1529564d31c685fd9ea
INFO: Creating SIF file...
INFO [job cowsay2.cwl] /tmp/ceztlkix$ singularity \
--quiet \
exec \
--contain \
--ipc \
--cleanenv \
--pid \
--home \
/tmp/ceztlkix:/qxOGcV \
--bind \
/tmp/o9hfm5tx:/tmp \
--pwd \
/qxOGcV \
/bil/users/icaoberg/code/singularity-cowsay/3.04/icaoberg_cowsay.sif \
cowsay \
'Hello world!'
 ______________
< Hello world! >
 --------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
INFO [job cowsay2.cwl] completed success
{}
INFO Final process status is success
```
but will also create a Singularity image file on disk.
### Adding more options
`cowsay` has more options than just the input string.
```
cowsay(6)                        Games Manual                        cowsay(6)

NAME
       cowsay/cowthink - configurable speaking/thinking cow (and a bit more)

SYNOPSIS
       cowsay [-e eye_string] [-f cowfile] [-h] [-l] [-n] [-T tongue_string]
              [-W column] [-bdgpstwy]
```
A `cowfile` is used to change the picture. For example, running the command
```
➜ code cowsay -f flaming-sheep "Hello World\!"
 ______________
< Hello World! >
 --------------
  \            .    .     .
   \      .  . .     `  ,
    \    .; .  : .' :  :  : .
     \   i..`: i` i.i.,i  i .
      \   `,--.|i |i|ii|ii|i:
           UooU\.'@@@@@@`.||'
           \__/(@@@@@@@@@@)'
                (@@@@@@@@)
                `YY~~~~YY'
                 ||    ||
```
will print a flaming sheep.
In this example, we will expose the `[-f cowfile]` argument by adding the lines
```
format:
  type: string
  inputBinding:
    position: 1
    prefix: -f
  default: "flaming-sheep"
```
to the input block, making the workflow look like
```
#!/usr/bin/env cwl-runner
cwlVersion: v1.0
class: CommandLineTool
requirements:
  SubworkflowFeatureRequirement: {}
hints:
  DockerRequirement:
    dockerPull: icaoberg/cowsay
baseCommand: "cowsay"
inputs:
  message:
    type: string
    inputBinding:
      position: 2
  format:
    type: string
    inputBinding:
      position: 1
      prefix: -f
    default: "flaming-sheep"
outputs: []
```
Then you can run it
```
05:03:16 icaoberg@workshop 3.04 ±|master ✗|→ cwltool --singularity cowsay3.cwl message3.cwl
INFO /bil/users/icaoberg/.local/bin/cwltool 3.1.20220224085855
INFO Resolved 'cowsay3.cwl' to 'file:///bil/users/icaoberg/code/singularity-cowsay/3.04/cowsay3.cwl'
INFO Using local copy of Singularity image found in /bil/users/icaoberg/code/singularity-cowsay/3.04
INFO [job cowsay3.cwl] /tmp/g2zknke0$ singularity \
--quiet \
exec \
--contain \
--ipc \
--cleanenv \
--pid \
--home \
/tmp/g2zknke0:/lzqerl \
--bind \
/tmp/fkb0phq4:/tmp \
--pwd \
/lzqerl \
/bil/users/icaoberg/code/singularity-cowsay/3.04/icaoberg_cowsay.sif \
cowsay \
-f \
flaming-sheep \
'Hello world!'
 ______________
< Hello world! >
 --------------
  \            .    .     .
   \      .  . .     `  ,
    \    .; .  : .' :  :  : .
     \   i..`: i` i.i.,i  i .
      \   `,--.|i |i|ii|ii|i:
           UooU\.'@@@@@@`.||'
           \__/(@@@@@@@@@@)'
                (@@@@@@@@)
                `YY~~~~YY'
                 ||    ||
INFO [job cowsay3.cwl] completed success
{}
INFO Final process status is success
```
Keep in mind your input file `message3.cwl` now looks like this
```
message: Hello world!
format: flaming-sheep
```
You can choose to expose as many input arguments as you want or set default values.
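The default-value behavior can be sketched as follows, with a hypothetical helper that is not cwltool's API: if the job file omits an input, the parameter's `default` is used; if there is no default, the input is required.

```python
# Hypothetical sketch of CWL default handling, NOT cwltool's API.

def resolve_inputs(parameters, job):
    """parameters: {name: spec}; a spec may carry a "default" value."""
    resolved = {}
    for name, spec in parameters.items():
        if name in job:
            resolved[name] = job[name]
        elif "default" in spec:
            resolved[name] = spec["default"]
        else:
            raise ValueError(f"missing required input: {name}")
    return resolved

params = {"message": {}, "format": {"default": "flaming-sheep"}}
print(resolve_inputs(params, {"message": "Hello world!"}))
```

So a job file that only sets `message` would still run, drawing `format` from the workflow's default.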
:::warning
Anything below this line I am still working on
:::
## Mixing and matching
Consider the following workflow, `fortune.cwl`
```
#!/usr/bin/env cwl-runner
cwlVersion: v1.0
class: CommandLineTool
requirements:
  SubworkflowFeatureRequirement: {}
hints:
  DockerRequirement:
    dockerPull: grycap/cowsay
baseCommand: /usr/games/fortune
inputs: []
outputs: []
```
which does something like
```
cwltool --singularity fortune.cwl
INFO /bil/users/icaoberg/.local/bin/cwltool 3.1.20220224085855
INFO Resolved 'fortune.cwl' to 'file:///bil/users/icaoberg/code/singularity-cowsay/3.04/fortune.cwl'
INFO Using local copy of Singularity image found in /bil/users/icaoberg/code/singularity-cowsay/3.04
INFO [job fortune.cwl] /tmp/lppo005u$ singularity \
--quiet \
exec \
--contain \
--ipc \
--cleanenv \
--pid \
--home \
/tmp/lppo005u:/jzrcyZ \
--bind \
/tmp/l0squr2a:/tmp \
--pwd \
/jzrcyZ \
/bil/users/icaoberg/code/singularity-cowsay/3.04/grycap_cowsay.sif \
/usr/games/fortune
Q: How many lawyers does it take to change a light bulb?
A: You won't find a lawyer who can change a light bulb. Now, if
you're looking for a lawyer to screw a light bulb...
INFO [job fortune.cwl] completed success
{}
INFO Final process status is success
```
---
The Brain Image Library is supported by the National Institute of Mental Health of the National Institutes of Health under award number R24-MH-114793. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.