
# Hackathon November 2020
---
<!-- Put the link to this slide here so people can follow -->
slides: https://hackmd.io/@Hackathon-November-2020/B1HgcJk5v
---
# Project 1: From Snakemake to Nextflow
---
## Primary focus
Convert an existing Snakemake pipeline into a Nextflow pipeline:
- equivalence between Snakemake and Nextflow functionalities
- other advice on Nextflow
---
## People involved
### Proposed by
- Maria Bernard (INRAE)
- Mathieu Charles (INRAE)
### Group members
- Maria Bernard (INRAE, BovReg)
- Mathieu Charles (INRAE)
- Christophe Klopp (INRAE)
- Daniel Fisher (LUKE, BovReg)
- Jose Espinosa (CRG)
- Emilio Palumbo (CRG)
- Alessio Vignoli (CRG)
- Suzanna Jin (CRG)
---
# Nextflow / Snakemake equivalence: technical questions
* **how to perform a dry run**
Snakemake: the `--dryrun` option.
There is no Nextflow equivalent; just run your workflow on a toy dataset (that is the nf-core recommendation).
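For instance, nf-core pipelines ship a small `test` profile for exactly this purpose (the pipeline name here is just an example):
```
nextflow run nf-core/rnaseq -profile test,docker
```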
* **how to deal with temporary files**
The Snakemake statement is `temp()` in a rule's output files definition.
In Nextflow there is no equivalent system (removing temporary files once they become useless). The advised method is to play with the scratch setting (have a look at the `process.scratch = true` statement), the work and published folders. Or you can manage temporary files manually by defining cleaning processes (but take care to remove the actual files, not only the symlinks).
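A minimal `nextflow.config` sketch combining both ideas (note that the `cleanup` option removes the whole work directory after a successful run, which also prevents using `-resume` afterwards):
```
// run each task in node-local scratch space, copying outputs back afterwards
process.scratch = true

// delete everything under work/ once the run completes successfully
// (warning: this makes -resume impossible for that run)
cleanup = true
```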
* **how to limit the number of jobs launched in parallel**
The Snakemake option is `--jobs`.
In Nextflow, the number of jobs defaults to 100. To change it, add an `executor` scope to the workflow config file `nextflow.config` (https://www.nextflow.io/docs/latest/config.html#scope-executor) and modify the `queueSize` attribute:
```
executor {
    name = 'sge'
    queueSize = 200
    pollInterval = '30 sec'
}
```
* **what about logging**
In Snakemake, use `--printshellcmds` to get a verbose log of all launched shell commands.
In Nextflow there is the `.nextflow.log` file, which records Nextflow's technical log information (number of processes passed, cached, failed, ...).
For the actual shell command lines, we need to use the `nextflow log` command and specify a particular session to see what has been done.
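For example (the run name is printed by `nextflow log` itself; the available `-f` fields can be listed with `nextflow log -l`):
```
nextflow log                                    # list previous runs and their names
nextflow log <run_name> -f name,status,script   # show what each task executed
```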
* **how to deal with dependencies**
Snakemake works (among other systems) with conda via the rule section `conda`, using an `env.yaml` file that specifies the conda dependencies.
In Nextflow there is a very similar system: add a `conda` statement to the process definition (https://www.nextflow.io/docs/latest/conda.html?highlight=conda#) and then use the `-with-conda` option on the `nextflow run` command line.
Example:
write a `my-env.yaml` file, something like:
```
name: my-env
channels:
  - conda-forge
  - bioconda
  - defaults
dependencies:
  - star=2.5.4a
  - bwa=0.7.15
```
and use this YAML file in the process:
```
process foo {
    conda '/some/path/my-env.yaml'

    script:
    '''
    your_command --here
    '''
}
```
There are different syntaxes to specify conda dependencies. You can also simply give the names of the conda packages separated by spaces: `conda 'bwa samtools multiqc'`
* **how to perform a conditional process**
We do not know how to do it with Snakemake.
In Nextflow, use the `when` statement on an input, like:
```
params.iter = [1,2,3]
iter_ch = Channel.fromList(params.iter)
// iter_ch could be the output of another process

process conditional_process {
    input:
    val x from iter_ch

    output:
    stdout into result

    when:
    x <= 2

    script:
    """
    printf $x
    """
}

result.view()
```
* **Dynamic resource allocation**
In Snakemake there is a global cluster configuration file that is used to allocate the resources for the rules. That means resources need to be allocated on a worst-case estimation so that the rule does not fail (e.g. out of memory, out of time, etc.).
Nextflow, however, can change the requested resources at runtime, depending on the error code. E.g. if for one sample the allocated RAM is not enough, the process can be restarted with an increased amount of RAM; for details see:
https://www.nextflow.io/docs/latest/process.html?highlight=when#dynamic-computing-resources
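A minimal sketch of that retry pattern (the directives are from the Nextflow docs; the process name and command are placeholders):
```
process foo {
    // request more memory and time on each attempt: x1, x2, x3 ...
    memory { 4.GB * task.attempt }
    time { 2.hour * task.attempt }

    // on failure, re-submit the task (up to 3 times) instead of aborting the run
    errorStrategy 'retry'
    maxRetries 3

    script:
    """
    your_command --here
    """
}
```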
* **workflow organisation**
In Snakemake it is recommended to have a rules directory with small snakefiles that define the rules, and one main snakefile that imports the rules and defines the workflow outputs.
In Nextflow (in DSL1), everything is written in one file.
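Note: the newer DSL2 syntax introduces a module system that allows a similar split. A sketch, assuming a process `FOO` defined in a separate file `./modules/foo.nf`:
```
// main.nf: enable the DSL2 syntax
nextflow.enable.dsl = 2

// import a process from a separate module file
include { FOO } from './modules/foo'

workflow {
    FOO(Channel.fromPath(params.input))
}
```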
* **general differences: file manipulation**
We need to put files in channels for Nextflow to understand it is a file and not just the string corresponding to the path of the file:
```
params.genome = 'genome.fa'
genome_ch = Channel.fromPath(params.genome)

process foo {
    input:
    file genome from genome_ch
}
```
or
```
params.genome = "$baseDir/genome.fa"

process foo {
    input:
    path genome from params.genome
}
```
!!! here we need an absolute path; it doesn't seem to work with a relative path
# Additional notes
* **Config profile to specify the computing environment**
There is a dedicated cluster profile for the Genotoul cluster (INRAE, Toulouse, France):
documentation: https://github.com/nf-core/configs/blob/master/docs/genotoul.md
config file: https://github.com/nf-core/configs/blob/master/conf/genotoul.config
* **How to make a workflow nf-core compatible**
There is a command to help you start an nf-core compatible workflow (https://nf-co.re/tools#creating-a-new-workflow): `nf-core create`
# STIP Nextflow implementation workflow
## Goal: find regulatory SNP candidates (rSNPs)
Does the SNP impact the sequence's affinity with a transcription factor (TF), i.e. the likelihood the TF will bind to this sequence?
* How does it work (a hypothetical process sketch for step 1 follows the input/output lists below)
    - step 1:
    We expect to find rSNP candidates in a 2 kb window upstream of the TSS. We filter the SNP file, keeping only the corresponding SNPs.
    - step 2:
    For each filtered SNP, we create ref and alt sequences corresponding to the SNP ref and alt bases and their surrounding sequences (+/- 14 bp).
    - step 3:
    We download the matrices from existing databases (TRANSFAC, JASPAR, HOCOMOCO) and prepare them (PWM <-> PFM, ratio, distribution).
    - step 4:
    For each filtered SNP, we compare the ref and alt sequences to each TF matrix.
    Does the ref and/or alt sequence have a strong affinity with the TF (putative TFBS)?
    Does the SNP impact the putative TFBS?
* inputs
    - Genome reference: FASTA file, plus annotation in GTF format
    - Variant file: VCF format
    - TFBS pattern matrices: directory of PWM files (one file per TFBS)
* outputs
    - TSV file with the regulatory variants and metadata such as TFBS name, affinity score and impact score
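A hypothetical Nextflow sketch of step 1 (the process name, `params.chrom_sizes` and the use of `bedtools` are our assumptions for illustration, not part of the actual pipeline):
```
process filterSNP {
    input:
    file gtf from Channel.fromPath(params.annotation)
    file vcf from Channel.fromPath(params.variants)

    output:
    file 'filtered.vcf' into filtered_snp_ch

    script:
    """
    # keep gene records, take the 2 kb strand-aware window upstream of each TSS,
    # then keep only the SNPs that fall inside those windows
    awk '\$3 == "gene"' $gtf > genes.gtf
    bedtools flank -s -i genes.gtf -g ${params.chrom_sizes} -l 2000 -r 0 > upstream.gtf
    bedtools intersect -a $vcf -b upstream.gtf -header > filtered.vcf
    """
}
```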
## git repository
https://github.com/MathieuCharlesINRAE/Hackathon_NextFlow_Nov2020/tree/main
## Nextflow questions
- creating composite inputs from non-paired files
In the training we saw that we can create composite inputs, for example for paired-end reads, which yields elements composed of an ID and a pair of files:
```
reads_ch = Channel.fromFilePairs('data/ggal/gut*_{1,2}.fq')
// and then in a process
set val(sample_id), file(sample_files) from reads_ch
```
which results in something like:
```
[gut, [/home/ec2-user/environment/data/ggal/gut_1.fq, /home/ec2-user/environment/data/ggal/gut_2.fq]]
```
How can we obtain the same composite input but with single-end reads? Or, for this pipeline, something like
`[tfbs1, /path/to/tfbs1.pwm]`
Solution:
```
reads_ch = Channel.fromPath('data/ggal/*_1.fq')
    .map { file -> [ file.name.replace("_1.fq", ""), file ] }
    .view()
// it returns:
// [lung, /home/ec2-user/environment/data/ggal/lung_1.fq]
// [gut, /home/ec2-user/environment/data/ggal/gut_1.fq]
// [liver, /home/ec2-user/environment/data/ggal/liver_1.fq]

// and then in a process
process echo {
    echo true

    input:
    set val(sample_id), file(sample_file) from reads_ch

    output:
    stdout into result

    script:
    """
    echo "$sample_id sequence file is $sample_file"
    """
}
```
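Applied to the PWM files of this pipeline (using the `baseName` file attribute; `params.matrixFolder` is an assumed parameter name):
```
pwm_ch = Channel.fromPath("${params.matrixFolder}/*.pwm")
    .map { file -> [ file.baseName, file ] }
// emits e.g. [tfbs1, /path/to/tfbs1.pwm]
```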
- how to bypass part of the process executions
In this pipeline we have a preprocessing step which can be quite long:
for each TFBS, an affinity score distribution and a score ratio distribution are computed.
Imagine a process that takes one PFM file as input and returns the two distributions.
For each TFBS, the distribution files are written to `work/random_id/`, with a different random_id for each process, hence for each TFBS.
On a later execution of the pipeline, how can we tell Nextflow not to re-run this process for already existing distributions?
solution 1:
\- use a preprocessed folder with the already computed distribution files
```
params.matrixPreprocessedFolder = "$baseDir/results/matrix_processed"
matrixPreprocessedFolder_ch = Channel.fromPath("${params.matrixPreprocessedFolder}/*.pfm")
    .map { file -> [ file.name.replace(".pfm", ""),
                     file,
                     "${params.matrixPreprocessedFolder}/" + file.name.replace(".pfm", ".score_distrib"),
                     "${params.matrixPreprocessedFolder}/" + file.name.replace(".pfm", ".ratio_distrib") ] }
    .view()
```
\- and then use a `when` statement?
But how to test whether a file exists?
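One possible answer, assuming the channel layout above: Nextflow file objects expose an `exists()` method, so a `when` block can skip TFBSs whose two distribution files are already present (the process name and command are placeholders):
```
process computeDistributions {
    input:
    set val(tfbs), file(pfm), val(score_path), val(ratio_path) from matrixPreprocessedFolder_ch

    // run only when at least one of the two distribution files is missing
    when:
    !file(score_path).exists() || !file(ratio_path).exists()

    script:
    """
    your_command --pfm $pfm
    """
}
```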
{"metaMigratedAt":"2023-06-15T15:48:10.478Z","metaMigratedFrom":"YAML","title":"Hackathon P1 - From Snakemake to Nextflow slides","breaks":true,"description":"View the slide with \"Slide Mode\".","contributors":"[{\"id\":\"51ee42f1-7906-4532-904a-5fb35327ff58\",\"add\":9114,\"del\":1754},{\"id\":\"06c2ddc6-0d4c-449b-9db2-7a7c3087b2c8\",\"add\":1797,\"del\":1079},{\"id\":\"7ca213e9-846a-4ede-906c-2fd23dff04dd\",\"add\":607,\"del\":1},{\"id\":\"dc580f85-324a-4b69-9c14-3d203e9f1488\",\"add\":1049,\"del\":338}]"}