[TOC]

Log in to the login node `<username>@twnia3.nchc.org.tw`, create a local conda environment under `/home/username`, and download the pb-human-wgs-workflow-snakemake tool.

The following configuration files must be edited by hand:

- workflow/config.yaml
    - Use CPU only: `cpu_only: True`
- workflow/variables.env
    > Set `export PARTITION` and `export ACCOUNT` to the partition that jobs are dispatched to and the project ID, respectively.
    - Compute partition: `export PARTITION=ct56`
    - Project ID: `export ACCOUNT=ENT11****`
    - Comment out the constraint: `#export DEEPVARIANT_AVX2_CONSTRAINT='--constraint=avx512'`
- workflow/profiles/slurm/config.yaml
    - Maximum threads: `max-threads: 56`
    - Number of concurrent jobs: `jobs: 50`
- workflow/process_smrtcells.slurm.sh
- workflow/process_sample.slurm.sh
- workflow/process_cohort.slurm.sh

> Apart from adjusting the #SBATCH settings, users only need to add `module load singularity` to the job scripts; every other tool is installed inside the conda environments (`workflow/rules/envs/*.yaml`).

```
#!/bin/bash
#SBATCH -A ENT11****
#SBATCH -p ct56
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --cpus-per-task 4

ml load libs/singularity/3.10.2
```

## Hands-on best practice

### Download Miniconda into the home directory

1. Download Miniconda into `/home/username`:

    ```
    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    ```

    Miniconda3 will now be installed into this location: `/home/blossom2023/miniconda3`

2. Confirm that the `__conda_setup` path in `~/.bashrc` has been updated to `/home/blossom2023/miniconda3/bin/conda`:

    ```
    source ~/.bashrc
    ```

    Edit `~/.condarc` (`vi ~/.condarc`) and disable auto-activation of the base environment (`conda config --set auto_activate_base false`):

    ```
    channel_priority: flexible
    auto_activate_base: false
    ```

    **Then log in again**
    - After installing Miniconda, it is recommended to log in again.
    - Confirm that `which conda` returns the correct path:

    ```
    ~/miniconda3/condabin/conda
    ```

3. Create the conda environment following the [pb-human-wgs-workflow-snakemake](https://github.com/PacificBiosciences/pb-human-wgs-workflow-snakemake/blob/b99476aa506343f005b3a7bcd0b870fddb138109/Tutorial.md) tutorial:

    ```
    # create conda environment
    conda install mamba -n base -c conda-forge
    conda activate base
    mamba create -c conda-forge -c bioconda -n pb-human-wgs snakemake=6.15.3 tabulate=0.8.10 pysam=0.16.0.1 python=3
    ```

    ```
    Downloading and Extracting Packages
    Preparing transaction: done
    Verifying transaction: done
    Executing transaction: done
    ```

    Confirm that the installation completed without problems.

    > It is recommended to log in to the NCHC cluster again at this point.

4. Confirm that the conda environment path is correct.
Check with `conda info --envs`: the environment is named `pb-human-wgs` and its path is `/home/blossom2023/miniconda3/envs/pb-human-wgs`.

5. Confirm that the directory structure is correct.
    The `reference` and `resources` directories hold additional data; contact the PacBio pb-human-wgs developers to obtain these files.

    > Contributors:
    > * Juniper Lake ([@juniper-lake](https://github.com/juniper-lake))
    > * William Rowell ([@williamrowell](https://github.com/williamrowell))
    > * Aaron Wenger ([@amwenger](https://github.com/amwenger))

    ```
    <directory_name>
    ├── cluster_logs
    ├── cohorts              # created during process_cohort
    ├── reference
    │   └── annotation
    ├── resources
    │   ├── decode
    │   ├── eee
    │   ├── gnomad
    │   ├── gnomadsv
    │   ├── hpo
    │   ├── hprc
    │   ├── jellyfish
    │   ├── slivar
    │   └── tandem-genotypes
    ├── samples              # created during process_smrtcells
    ├── smrtcells
    │   ├── done
    │   └── ready
    └── workflow
        ├── rules
        │   └── envs
        └── scripts
            ├── calN50
            └── svpack
    ```

### Run process_smrtcells Analysis

1. Run the analysis.
    Activate the conda environment every time before running any part of the workflow or submitting a job script:

    ```
    # activate conda environment
    # do this every time you want to run any part of workflow
    conda activate pb-human-wgs
    ```

2. Confirm that the packages in the conda environments have been built.
    **Run the script once locally before moving on to job submission.**
    The goal is to build the required environments first; the only difference from the job script is the extra `--conda-create-envs-only` flag. This approach, following [issue #142](https://github.com/PacificBiosciences/pb-human-wgs-workflow-snakemake/issues/142), builds the conda environments on your login node:

    ```
    umask 002
    source workflow/variables.env
    snakemake \
    --profile workflow/profiles/slurm \
    --snakefile workflow/process_smrtcells.smk \
    --conda-create-envs-only
    ```

    Confirm that the environments were built successfully.
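Before moving on to job submission, the folder layout from the previous section can be sanity-checked with a short shell sketch. This is our own helper (`check_layout` is not part of the workflow), assuming it is run from the workflow root; the directory names come from the tree shown earlier:

```shell
#!/bin/bash
# Hedged sketch: verify the expected top-level folders exist before
# submitting jobs. Directory names are taken from the tree above;
# check_layout is a hypothetical helper, not part of the workflow.
check_layout() {
  local ok=1
  for d in cluster_logs reference resources smrtcells/ready smrtcells/done workflow; do
    if [ ! -d "$d" ]; then
      echo "missing: $d"   # report each folder that is not there yet
      ok=0
    fi
  done
  [ "$ok" -eq 1 ] && echo "layout OK"
}
```

Running `check_layout` in a fresh clone lists any folder that still needs to be created or requested (e.g. `resources`).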
3. The job script only needs `singularity` loaded: add `ml load libs/singularity/3.10.2` to `workflow/process_smrtcells.slurm.sh`.

    **Job script reference**

    ```=
    #!/bin/bash
    #SBATCH -A ENT******                         # Account name/project number
    #SBATCH -J pb-human-wgs-workflow-snakemake   # Job name
    #SBATCH -p ct56                              # Partition name
    #SBATCH -N 1
    #SBATCH -n 1
    #SBATCH -o cluster_logs/slurm-%x-%j-%N.out

    ml load libs/singularity/3.10.2

    # USAGE: sbatch workflow/process_smrtcells.slurm.sh

    # set umask to avoid locking each other out of directories
    umask 002

    # get variables from workflow/variables.env
    source workflow/variables.env

    # execute snakemake
    snakemake \
    --profile workflow/profiles/slurm \
    --snakefile workflow/process_smrtcells.smk
    ```

4. Submit the job: `sbatch workflow/process_smrtcells.slurm.sh`

    In the log file of a normal `process_smrtcells.smk` run you can see every job name executed in the workflow and how many threads each uses; at the end there is a `(100%) done` line followed by the complete log output.

    (...omitted...)

    Inside the `samples` folder, a `<sample_id>` directory and the output files of the corresponding tools are generated automatically.

---

### Run process_sample Analysis

1. Add `ml load libs/singularity/3.10.2` to `workflow/process_sample.slurm.sh`.

    **Job script reference**

    ```=
    #!/bin/bash
    #SBATCH -A ENT******                         # Account name/project number
    #SBATCH -p ct56                              # Partition name
    #SBATCH -N 1
    #SBATCH -n 1
    #SBATCH -o cluster_logs/slurm-%x-%j-%N.out

    ml load libs/singularity/3.10.2

    # USAGE: sbatch workflow/process_sample.slurm.sh <sample_id>
    SAMPLE=$1

    # set umask to avoid locking each other out of directories
    umask 002

    ... [remainder omitted]
    ```

2. Delete the parenthesis `)` on line 18 of `workflow/process_sample.smk`.
    > Delete ')' in line 18 at workflow/process_sample.smk

3. Following the author's reply in this issue, download the deepvariant singularity image file (`.sif`) and update the container path in the `.smk` file: [Specifying directory for singularity downloads](https://github.com/PacificBiosciences/pb-human-wgs-workflow-snakemake/issues/142#issuecomment-1334528153)
    > The workflow has been updated to deepvariant v1.5.0, so download the `.sif` file for v1.5.0. Check `workflow/config.yaml` to confirm which deepvariant version the workflow uses.

4. Build the conda environments for each sample locally.
(Run this once per sample `$SAMPLE`.)
    > Replace `$SAMPLE` below with the sample name, i.e. the sample folder name under `smrtcells/ready/`.

    ```
    ml load libs/singularity/3.10.2
    snakemake \
    --nolock \
    --config "sample='$SAMPLE'" \
    --profile workflow/profiles/slurm \
    --snakefile workflow/process_sample.smk \
    --conda-create-envs-only
    ```

    (The example above shows the environment build for process_sample.smk.)

5. Submit the job: `sbatch workflow/process_sample.slurm.sh $SAMPLE`
    > Replace `$SAMPLE` with the sample name; submit each sample separately.

6. Check the status and results of the slurm job:
    `squeue -j [job id] # check whether a submitted job is PD (Pending) or R (Running)`
    `tail -f cluster_logs/slurm-process_sample.slurm.sh-[job id-NodeID].out # follow the latest progress of the slurm job`

    When `process_sample.smk` runs normally you will see the following folder structure; output files are placed in the folders of the corresponding tools.

### Run process_cohort Analysis

1. Copy `example_cohort.yaml` and rename it `cohort.yaml`:
    `cp workflow/example_cohort.yaml cohort.yaml`

2. Edit the cohort information in `cohort.yaml` and note the `[cohort_id]`.

3. Following the author's reply in this issue, download the glnexus singularity image file (`.sif`) and update the container path in the `.smk` file: [Specifying directory for singularity downloads](https://github.com/PacificBiosciences/pb-human-wgs-workflow-snakemake/issues/142#issuecomment-1334528153)
    > The workflow has been updated to glnexus v1.4.1, so download the `.sif` file for v1.4.1. Check `workflow/config.yaml` to confirm which glnexus version the workflow uses.

4. Build the conda environments for each cohort locally (run once per `$COHORT`).
    > Replace `$COHORT` below with the cohort name, i.e. the `[cohort_id]` in `cohort.yaml`.

    ```
    ml load libs/singularity/3.10.2
    snakemake \
    --config "cohort='$COHORT'" \
    --nolock \
    --profile workflow/profiles/slurm \
    --snakefile workflow/process_cohort.smk \
    --conda-create-envs-only
    ```

5. Submit the job: `sbatch workflow/process_cohort.slurm.sh $COHORT`
    > Replace `$COHORT` with the cohort_id; submit each cohort separately.

6. Check the status and results of the slurm job:
    `squeue -j [job id] # check whether a submitted job is PD (Pending) or R (Running)`
    `tail -f cluster_logs/slurm-process_cohort.slurm.sh-[job id-NodeID].out # follow the latest progress of the slurm job`

### FAQ / Notes

1. When running the `process_cohort` workflow on a trio, check that every input sample folder exists under `smrtcells/ready`.
2. Taiwania 3 biomedical partitions vs.
Taiwania 3 general partitions
    - Biomedical partitions: cannot be used to run the snakemake workflow, because jobs must be submitted with one fixed core count. For example, when the ngs372 partition runs the workflow, every snakejob inside snakemake would have to use cores=56, but the PacBio snakefiles (`.smk`) already assign a different core count to each snakejob, so workflow jobs sent to ngs372 get stuck with the error `QOSMinCpuNotSatisfied`.
    - General partitions: currently only the ct56 partition at NCHC can allocate 1-56 cores per task according to what the task needs. The workflow is therefore not limited to a single fixed core count across different snakejobs and can finish normally.
3. Make sure there is enough storage space.
4. Before running `process_smrtcells`, `process_sample`, or `process_cohort`, confirm that the conda environments have already been created locally.
5. The pb-human-wgs workflow is still being developed and updated; if you hit any problem, ask on the [Issue page](https://github.com/PacificBiosciences/pb-human-wgs-workflow-snakemake/issues), or first search whether a similar issue has already been resolved.

### Solving the "Downloading and installing remote packages" problem

> The compute nodes cannot reach the internet, which produces a `CreateCondaEnvironmentException`;
> `conda activate /opt/ohpc/Taiwania3/pkg/biology/PacificBiosciences/pb-human-wgs-workflow-snakemake/pb-human-wgs/` cannot be used.
> The login node can reach the internet; download and update tools there.
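The fix above — creating every conda environment from the login node before any job runs on a compute node — can be wrapped in a small helper. This is only a sketch under our own naming (`prebuild_envs` and `DRY_RUN` are not part of the workflow); note that `process_sample.smk` and `process_cohort.smk` additionally need the `--config "sample=..."` / `--config "cohort=..."` flags shown in the earlier steps:

```shell
#!/bin/bash
# Hedged sketch: run the env-creation step for all three workflows from the
# login node (which has internet access), so compute nodes never hit a
# download step. prebuild_envs and DRY_RUN are our own names.
# NOTE: process_sample.smk / process_cohort.smk also require the
# --config sample=... / cohort=... flags shown in the steps above.
prebuild_envs() {
  umask 002
  for smk in process_smrtcells process_sample process_cohort; do
    cmd="snakemake --profile workflow/profiles/slurm --snakefile workflow/${smk}.smk --conda-create-envs-only"
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "$cmd"    # just print the command that would run
    else
      $cmd           # actually build the environments
    fi
  done
}
```

With `DRY_RUN=1 prebuild_envs` you can first inspect the three commands before running them for real on the login node.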