---
title: nf-enthusiasts collaborative notes
tags: nextflow,notes,training
---
# nf-enthusiasts collaborative notes
🌯 Feed me your notes!
These have turned into Selina's personal notes, so while this feels powerful, please contribute :woman-bowing:
### Remind James to use box analogy :card_file_box:
:-)
### 28.10.22 Setting up a new pipeline
create a conda environment on the local machine and install nf-core tools
```bash
conda create -n nfcoretools bioconda::nf-core
conda activate nfcoretools
```
pick a folder
```bash
cd folder
```
install Nextflow
```bash
conda install bioconda::nextflow
```
create new nf-core pipeline from template
```bash
nf-core create
# interactive prompts:
#   name: genomeassembly
#   description
#   author
#   customise: yes
#   prefix: aidaanva
#   skip template areas  # keep the nf-core configs - they make pipelines usable on other clusters
```
creates a new directory with:
- assets (test sample sheet)
- bin (runnable scripts)
- conf (nf-core defaults for memory, outputs, etc.)
- docs (documentation)
- lib (background stuff)
- modules (local + nf-core modules installed with nf-core tools)
- subworkflows
- workflows
- main.nf (full pipeline script)
create input sheet
modify subworkflows/local/input_check.nf - converts the sample sheet to read channels (included by default; see the sketch below)
modify modules/local/samplesheet_check.nf - checks the input tsv structure and splits it into a meta map and variables
modify bin/check_samplesheet.py - ask Thiseas
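A hedged sketch (not the template code; column names follow the tsv structure used below, everything else is illustrative) of how a sample sheet can be turned into a reads channel:
```groovy
// sketch only: parse a TSV sample sheet into [ meta, [ reads ] ] tuples
Channel
    .fromPath(params.input)
    .splitCsv(header: true, sep: '\t')
    .map { row ->
        def meta  = [ id: row.sample_id, single_end: row.r2 ? false : true ]  // meta map per sample
        def reads = meta.single_end ? [ file(row.r1) ] : [ file(row.r1), file(row.r2) ]
        [ meta, reads ]
    }
    .set { ch_reads }
```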
create test data - as small as possible & as big as necessary (GitHub file upload limit is 100 MB), e.g. mammoth mitochondrial data with 10,000 reads for eager
### 04.11.22 Input validation & adding modules
make a test tsv according to the required structure (sample_id, library_id, pairment, damage_treatment, r1, r2)
```bash
rmate test.tsv
```
test sample sheet input module
```bash
nextflow run main.nf --input test.tsv --outdir . -dump-channels
```
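For spot-checking a single channel during development, `view()` is a handy complement to `-dump-channels` (channel name illustrative):
```groovy
// print every item of one channel while debugging
ch_input.view { "input: ${it}" }
```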
check available modules on GitHub (nf-core/modules), on the nf-core website https://nf-co.re/modules, or in the console
```bash
nf-core modules list remote
```
pull existing modules from nf-core
```bash
nf-core modules install modulename
```
can check input and output channels in meta.yml
add the new module and its channels to the pipeline .nf file
```groovy
include { MODULENAME } from '/path'

MODULENAME (
    ch_input, []    // input channel(s); [] for an optional input left empty
)
```
if you run into memory requirement issues, use the test profile defined in conf/test.config: `-profile test,docker`
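For orientation, a minimal sketch of what the template's conf/test.config typically holds (values illustrative), capping resources so the test profile runs anywhere:
```groovy
// conf/test.config sketch: small resource limits plus a tiny test input
params {
    config_profile_name        = 'Test profile'
    config_profile_description = 'Minimal test dataset to check pipeline function'
    max_cpus   = 2
    max_memory = '6.GB'
    max_time   = '6.h'
    input      = 'test.tsv'   // illustrative path to a small sample sheet
}
```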
### 11.11.22 AdapterRemoval output channel modification
Results of a process can be emitted in different constellations,
e.g. as a single channel:
```groovy
output:
tuple path(R1), path(R2), path(singletons), emit: reads
```
or as individual channels:
```groovy
output:
path(R1)        , emit: R1
path(R2)        , emit: R2
path(singletons), emit: singletons
```
the next process requires input: tuple(meta, [R1, R2], singletons)
```groovy
ADAPTERREMOVAL2.out.R1
    .mix( ADAPTERREMOVAL2.out.R2 )
    .mix( ADAPTERREMOVAL2.out.singletons )
    .groupTuple()   // combine items with the same meta into one [ meta, [ R1, R2, singletons ] ]
    .map { meta, files ->
        def ( r1, r2, singletons ) = files
        [ meta, [ r1, r2 ], singletons ]
    }               // reorder elements within the tuple; any variable names work
```
### 18.11.22 cancelled
### 25.11.22 yml creation for spades
spades needs a yml file with the library information of each sample, which is created by a script
add the script as a local module and include it in the main pipeline .nf
```groovy
include { YML_CREATION_SPADES } from '../modules/local/yml_creation'

YML_CREATION_SPADES ( ch_input_for_yml_creation )
```
nf-core specific: task.ext.prefix in a module sets the output file name variation for each execution of the module (e.g. the sample name)
```groovy
def prefix = task.ext.prefix ?: "${meta.id}"
```
use flatten during channel mapping to convert paired elements ([f1.1, f1.2], [f2.1, f2.2]), or any list of lists, into a "flat" list (f1.1, f1.2, f2.1, f2.2)
```groovy
.map { meta, pairs, singletons ->
    def pairs_flat = pairs.flatten()
    [ meta, pairs_flat, singletons ]
}
```
### 02.12.22 cancelled
### 09.12.22 eager bwaaln review
read in parameters from command line/config
publishDir = final files in the results folder; they can be copied or symlinked, or publishing can be disabled (see the sketch below)
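A hedged sketch of such a publishDir override in conf/modules.config (module name and path illustrative):
```groovy
// publish one module's results, copied or symlinked, or disable publishing entirely
process {
    withName: 'SAMTOOLS_FLAGSTAT' {
        publishDir = [
            path: { "${params.outdir}/flagstat" },
            mode: params.publish_dir_mode,   // e.g. 'copy' or 'symlink'
            enabled: true                    // set to false to disable publishing
        ]
    }
}
```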
bwa samse and bwa sampe for single-end and paired-end data respectively = subworkflow (now importable as a module)
CI (continuous integration) tests check that changes don't break the pipeline
nextflow.config = add default parameters
nextflow_schema.json = parameter documentation for the nf-core website
append to a pre-existing collection channel (e.g. the versions file):
```groovy
ch_versions = ch_versions.mix( MODULE.out.versions )
```
take only a selection of the output into the channel
```groovy
.map { meta, fasta, fai, dict, index -> [ meta, index ] }
```
if you have several modules you want to switch between across runs (e.g. different mappers), implement the switch as a subworkflow instead of many if/else statements (see the sketch below)
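A hedged sketch of that idea (module and parameter names are placeholders): the if/else lives once, inside the subworkflow:
```groovy
// sketch: choose the mapper once in a subworkflow instead of scattering if/else through the pipeline
workflow MAP {
    take:
    ch_reads    // [ meta, reads ]
    ch_index    // [ meta, index ]

    main:
    ch_bam = Channel.empty()
    if ( params.mapper == 'mapper_a' ) {
        MAPPER_A ( ch_reads, ch_index )     // placeholder module
        ch_bam = MAPPER_A.out.bam
    } else if ( params.mapper == 'mapper_b' ) {
        MAPPER_B ( ch_reads, ch_index )     // placeholder module
        ch_bam = MAPPER_B.out.bam
    }

    emit:
    bam = ch_bam
}
```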
subworkflow: map to multiple references at once
make sure each bam is paired with each reference file, even if they arrive from the previous process at different times
```groovy
.combine( index )   // combine each bam with each reference
.multiMap {         // split the existing channel into 2 synchronised subchannels
    meta, reads, meta2, index ->
    reads: [ meta, reads ]
    index: [ meta, index ]
}
```
split input for samse/sampe depending on SE/PE
```groovy
.join( ch_sai )    // combine with the sai channel (name illustrative) based on the meta map
.branch {          // split the channel (order is not considered)
    meta, reads, sai ->
    pe: !meta.single_end
    se: meta.single_end
}
```
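Assuming the branched result is assigned to a channel (e.g. `ch_branched = ...`), each branch can then be routed to the matching module (call signatures simplified, names illustrative):
```groovy
// sketch: feed each branch into the right aligner step
BWA_SAMPE ( ch_branched.pe )
BWA_SAMSE ( ch_branched.se )
```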
### 16.12.22 cancelled
### 23.12.22 Q&A
###### Initial pipeline planning
1. Is the process worth automating? - at least 3 modules, run relatively often
2. Does this pipeline already exist?
3. Do containers exist on conda/etc.? Which exist already on modules?
4. What modules do you need to create?
5. What dependencies do you need?
6. What input and output for each part of the pipeline?
7. Should parts be split into subworkflows? (same input & output)
:pencil: Drawing lines & boxes :black_square_button: :arrow_right: :black_square_button: :arrow_right: :black_square_button: :arrow_right: :black_square_button:
e.g. Developing FuncScan https://hackmd.io/@GKVm6YxRSSsS-BTg2auXA/BkhEu3V1j
nf-core walkthrough "Adding pipelines" https://nf-co.re/docs/contributing/addingpipelines
### 06.01.2023 cancelled
### 13.01.2023 Deduplication subworkflow
input:
- ch_bam_bai channel
- fasta
- fasta_fai

build_interval.nf - chromosome list per reference
create a second meta with the chromosome list and combine by the first element in the channel, change the reference to an id, multiMap, combine the regions with the bam_bai channel in the pattern samtools expects ....
deduplicate the bam files
merge the bam files together (drop the genomic region from meta, group by the remaining meta -> list of bams across genomic regions, merge with samtools, index, flagstat; see the grouping sketch below)
output:
- merged bam & bai, stats

if statement to use markduplicates instead of dedup
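A hedged sketch of the grouping inside the merge step above (the meta key name 'genomic_region' is illustrative):
```groovy
// sketch: drop the genomic region from meta, then collect all per-region bams of a sample for merging
ch_dedupped
    .map { meta, bam -> [ meta.findAll { it.key != 'genomic_region' }, bam ] }
    .groupTuple()   // -> [ meta, [ bam_region1, bam_region2, ... ] ]
```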
### 27.01. Deduplication pull request explanation
###### eager.nf
check if using dedup without read merging
multimap fasta and fai to all samples in order, split fasta and fai channels
if skipping deduplication use input bams as "dedup" bams
###### map.nf
merge by lane and merge by sample, meta without colour chemistry, etc.
###### deduplicate.nf subworkflow
ch_bam_bai (meta, bam, bai) one per sample
ch_fai --> bed --> (meta_ref, bed), one per reference # map a new meta (the ID of the reference) to allow deduplication after multi-reference mapping
build_intervals(fasta_fai) - local module = awk command printing list of chromosomes for each reference to speed up deduplication
combine( Build_intervals.out.bed.map{ ... }, by: 0 ) # add the reference bed to each sample; by: 0 acts like join
map --> (meta, bam, bai, bed) for each sample and chromosome
markduplicates (input: meta + bam, fasta, fai)
combine # add fasta and fai for each reference and sample/chromosome
multimap to required input channels
dedup each bam
dedup (doesn't need reference as input)
merge deduplicated chromosomes
clone meta without chromosome name
group by remaining meta information
copy reference out of meta and merge by reference
samtools_merge all chromosome bams
samtools_sort and index merged bams
samtools_flagstat
emit bams, bais, flagstat output
### 03.02. Deduplication pull request review
###### modules.config
user can modify module parameters
ext.args # command line flags
ext.prefix # file prefix
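A minimal sketch of such an override in conf/modules.config (module name and flag are placeholders):
```groovy
// per-module overrides picked up as task.ext.args / task.ext.prefix inside the module
process {
    withName: 'MODULE_NAME' {
        ext.args   = '--some-flag'            // placeholder: extra command line flags
        ext.prefix = { "${meta.id}.suffix" }  // placeholder: per-sample output file prefix
    }
}
```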
###### docs/output.md
Document as you go along :female-police-officer:
###### modules.json
local versions of nf-core modules - should work fine
###### modules/local/.nf
easier if it follows nf-core structure
the `enable_conda` option is now handled by Nextflow itself; remove it from modules
###### nextflow.config
add any extra parameters
###### nextflow_schema.json
parameter documentation
dropdown rendered automatically
###### subworkflows/local/deduplication.nf
no nesting of functions - don't be afraid of new channels; whitespace before {}; indent .map() operations
only pick up the versions from the first call of a module (see the sketch below)
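A hedged sketch of that versions pattern (module name illustrative):
```groovy
// every task emits the same versions.yml, so collecting it from the first call is enough
ch_versions = ch_versions.mix( MODULE.out.versions.first() )
```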
###### workflows/eager.nf
integrate new workflows into eager
if a workflow can be skipped, you need to create an empty channel (see the sketch below)
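A hedged sketch of the skip pattern (channel, subworkflow and parameter names illustrative):
```groovy
// default to an empty channel so downstream operators still work when the step is skipped
ch_dedupped = Channel.empty()
if ( !params.skip_deduplication ) {
    DEDUPLICATE ( ch_bam_bai, ch_fasta, ch_fasta_fai )
    ch_dedupped = DEDUPLICATE.out.bam
}
```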
### 10.02.23 AuthentiCT bioconda recipe
[James' tutorial](https://hackmd.io/sJDtHEYOSxmjoKQB2gj9pg)
recipe: yaml file with path of source code, dependencies and tests
bioconda recipes hosted on github bioconda/bioconda-recipes
fork bioconda-recipes, make a new branch named after the tool, and clone the bioconda-recipes repository to your computer
```bash
conda install conda-build
```
make a new folder under recipes/ (tool name in lower case)
in the tool-name folder, make a file 'meta.yaml' (indent with 2 spaces, not tabs)
```yaml
{% set version = "1.0.0" %}  # version of the tool

package:
  name: authentict
  version: "{{ version }}"

source:
  url: https://github.com/StephanePeyregne/AuthentiCT/archive/refs/tags/{{ version }}.tar.gz  # url to the source code tar.gz
  sha256:  # sha256sum hash of the tarball

build:
  number: 0       # build number = version of the conda recipe
  noarch: python  # depends on the tool
  script: '$PYTHON -m pip install . --no-deps --ignore-installed --no-cache-dir -vvv'  # command to install the software, depends on the software

requirements:  # dependencies
  host:
    - python >=3.6
    - pip
    - cython
  run:  # check the documentation and code of the tool; add versions but not too strictly; can specify where to pull from (conda-forge/bioconda/etc.)
    - python >=3.6
    - pandas >=0.25.1
    - scipy >=1.3
    - numpy >=1.17.2
    - numdifftools >=0.9.39

test:  # must exit with exit code 0
  commands:
    - AuthentiCT --help

about:
  home: https://github.com/StephanePeyregne/AuthentiCT  # url of the original tool
  license: GPL3  # redistribution license
  license_file: LICENCE  # license file from the tool repository
  summary: 'Estimate the proportion of present-day DNA contamination in ancient DNA datasets generated from single-stranded libraries'  # short description for the bioconda website

extra:
  recipe-maintainers:  # who made the recipe
    -
  identifiers:  # doi
    - doi:
```
test locally before submitting to bioconda, use mamba for faster building
```bash
conda-build recipes/authentict
# prints a lot of text into the console
# builds software
# installs environment
# runs tests
```
add to bioconda-recipes with pull request
some tools need extra compilation steps (e.g. for C++) in a 'build.sh'; you can sometimes patch the source code to allow compilation (e.g. fix hard-coded variables)
bioconda-recipes sends recipes to the Azure cloud and performs linting checks
if all checks pass, add the "please review & merge" tag
every bioconda recipe is now automatically built into Docker and Singularity containers (biocontainers.pro); you can also manually request tools on conda-forge, or multi-tool containers via the biocontainers GitHub repositories. Containers are automatically added to [quay.io](http://quay.io) (if this fails, ask on the biocontainers gitter) and to [depot.galaxyproject.org/singularity](http://depot.galaxyproject.org/singularity) (takes ~24 hrs)
conda-forge ([conda-forge/staged-recipes](https://github.com/conda-forge/staged-recipes)) follows a similar process but is not bioinformatics-specific; it is used especially for R packages on CRAN
Grayskull can build recipes from CRAN and PyPI (github.com/conda/grayskull)
### 17.02.2023 nf-core/eager3 module building
###### angsD module test
Load required files for angsd module from [nf-core/test-datasets](https://github.com/nf-core/test-datasets) instead of angsd repository to avoid breaking due to file name or path changes
nf-core/test-datasets works on different branches, e.g. github.com/nf-core/test-datasets/tree/modules/data/delete_me/angsd
###### Module building
[dsl2_modules_tutorial](https://nf-co.re/developers/tutorials/dsl2_modules_tutorial)
software environment: conda (environment with nf-core tools and nextflow)
```bash
conda create -n nf-core -c bioconda nextflow nf-core=2.7
```
* Recommendation: install mamba for faster environment resolving
a) Fork nf-core/modules (name the fork nf-core-modules)
b) Clone repository by ssh
* Recommendation: set up ssh connection to github
```bash
# find the ssh key generated for the cluster
cat ~/.ssh/id_rsa.pub
# copy it to GitHub: Settings > SSH and GPG keys
```
c) Open a new issue on [nf-core/modules](https://github.com/nf-core/modules) for each new module (uppercase)
* create a separate module for each subcommand where applicable (e.g. samtools view, AuthentiCT deam2cont)
d) Coding in Visual Studio Code
- Make a new branch (+), name it in lowercase
```bash
nf-core --help
nf-core modules create
# interactive prompts:
#   modules directory: enter default
#   name: tool/subtool
#   process resource label: process_single  # depends on cpus required or whether threading is possible
#   meta map?
# loading the container often fails here, ignore
```
e) Add the container manually to **authentict/deam2cont/main.nf**
* Go to depot.galaxyproject.org/singularity
* Docker containers are created almost instantly, but the Singularity image can take overnight after the bioconda recipe is created
* copy & paste container links
f) Set inputs and outputs
```groovy
input:
tuple val(meta), path(bam)
tuple val(meta2), path(config)
tuple val(meta3), path(positions)
```
* Name outputs to match their content (tsv, txt...)
```groovy
output:
tuple val(meta), path("*.txt"), emit: txt
path "versions.yml" , emit: versions
```
g) Script
* Use a double backslash (`\\`) to escape the bash backslash in Nextflow
* command line parameters that do not require input files are loaded with `$args` from the config file
```groovy
script:
def args = task.ext.args ?: '' // load additional parameters
def prefix = task.ext.prefix ?: "${meta.id}" // sample/file name
def VERSION = '1.0.0' // WARN: Version information not provided by tool on CLI. Please update this string when bumping container versions. //hard-coded, use only if no version output command included in tool
def config_file = config ? "-c ${config}" : "" // optional input for AuthentiCT, load only if provided
def positions_file = positions ? "-p ${positions}" : "" // optional input for AuthentiCT, load only if provided
"""
AuthentiCT \\
deam2cont \\
$args \\
$config_file \\
$positions_file \\
$bam \\
> ${prefix}.txt
cat <<-END_VERSIONS > versions.yml
"${task.process}":
authentict: $VERSION
END_VERSIONS
"""
```
h) **authentict/deam2cont/meta.yml**
* description: ideally copy-paste from GitHub
* Describe each input and output
```yaml
# example input
input:
  - config:
      type: file
      description: AuthentiCT configuration text file
      pattern: "*"

# example output
output:
  - txt:
      type: file
      description: Maximum likelihood estimates with associated standard errors
      pattern: "*.txt"
```
i) test module output
```bash
nf-core modules create-test-yml MODULENAME
```
next time:
1. test AuthentiCT/deam2cont module
2. Add angsd to pipeline [Adding to Pipeline Tutorial](https://nf-co.re/docs/contributing/tutorials/adding_modules_to_pipelines)