tags: unix PBS qsub qstat qalter qdel qselect grep

Portable Batch System (PBS) commands

qsub

Submit one job

## Example 1
qsub -l select=1:ncpus=1:mem=8gb -l walltime=10:0:0 ${locTest}/ADHD2017.1.PRS.sh
# 2675742.hpcpbs02

qstat -u lunC

# hpcpbs02:
#                                                            Req'd  Req'd   Elap
# Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
# --------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
# 2675742.hpcpbs0 lunC     short    ADHD2017.1  52464   1   1    8gb 10:00 R 00:00
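
# While a job waits in the queue, you can poll its state with the standard watch utility (the 30-second interval is arbitrary):
# Re-run qstat every 30 seconds; the S column shows the state (Q = queued, R = running)
watch -n 30 "qstat -u lunC"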

## Example 2. Chunk-style 'select' cannot be mixed with an old-style '-l ncpus=' request
## (the conflicting ncpus request here presumably comes from a #PBS directive inside the script)
qsub -l select=1:ncpus=1:mem=8gb -l walltime=8:0:0 ADHD2017.S1.summingPRS.sh
# qsub: "-lresource=" cannot be used with "select" or "place", resource is: ncpus

## Example 3. Don't put spaces in the comma-separated resource list; otherwise qsub fails with 'qsub: illegal -l value'
qsub -l ncpus=2,mem=8gb,walltime=24:00:00 ADHD2017.S1.summingPRS.sh
# 2697535.hpcpbs02

Submit multiple similar jobs with one qsub script

# Create the single job script. This script simply prints the value of a qsub -v variable, which takes the value of a bash variable
jobScript=/mnt/lustre/working/lab_nickm/lunC/PRS_UKB_201711/GWASSummaryStatistics/scripts/test/qsub_test.sh
cat $jobScript
#!/bin/bash
#PBS -l ncpus=1
#PBS -l mem=50gb
#PBS -l walltime=1:00:00
#PBS -m abe    # email when the job aborts (a), begins (b), or ends (e)

# Assign qsub -v variables to new variables
qsub_v_var1_new=$qsub_v_var1 
echo "qsub_v_var1= $qsub_v_var1"; # you can use the qsub -v variable
echo "qsub_v_var1_new= $qsub_v_var1_new"; # or use the new variable

# Submit 10 jobs using the job script file in a for loop; seq -w zero-pads the values (01..10)
for i in $(seq -w 10); do
    echo $i;
    qsub -v qsub_v_var1=$i $jobScript;
done;

# Check the output of the 10 submitted jobs
cat qsub_test.sh.o*
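
# qsub -v also accepts a comma-separated list, so one job can receive several variables; the variable names below are hypothetical:
# Pass two variables to the same job script (qsub_v_var1/qsub_v_var2 are made-up names)
qsub -v qsub_v_var1=foo,qsub_v_var2=bar $jobScript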

Obtain an interactive session on the HPC cluster

# step1: PuTTY left panel: SSH > X11 > tick 'Enable X11 forwarding'
# step2: PuTTY left panel: Session > type the host name > Open
# step3: run qsub as
qsub -X -I -q training -l walltime=8:00:00 -l ncpus=1 -l mem=1gb
# step4: the [user@host] prompt should turn green, indicating the interactive session has started
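
# Once the session starts, a quick way to confirm X11 forwarding works (assuming xclock is installed on the compute node):
# DISPLAY should be set inside the interactive session
echo $DISPLAY
# Launch a small X application; a clock window should appear on your local machine
xclock &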

Use qsub with a for loop

# Example 1. Submit one PRS job per trait and chromosome
for i in ADHD2017 ASD2015; do
    for j in {1..22}; do
        qsub -l select=1:ncpus=1:mem=8gb -l walltime=10:0:0 -N "${i}.${j}.PRS" ${i}.${j}.PRS.sh;
    done;
done;

# Example 2. Submit the same script 100 times
for jobnum in {1..100}; do
    qsub -l ncpus=1 -l mem=1gb -l walltime=5:00 test.pbs;
done;

qstat

Find out which node a particular job runs on. Nodes 33 and 34 are slow ones on the QIMR HPC; consider deleting submitted jobs that are sent to either of these two nodes.

# Suppose the job number is 8587001
qstat -fx 8587001 | grep exec_host
#    exec_host = hpcnode034/4*8    # the job runs on hpcnode034
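
# A minimal sketch, assuming qstat -u $USER lists your jobs with the numeric job ID at the start of each line, that deletes any of your jobs placed on the two slow nodes:
# Check the execution host of each of my jobs; delete those on hpcnode033 or hpcnode034
for job in $(qstat -u $USER | grep -oE '^[0-9]+'); do
    case "$(qstat -fx $job | grep exec_host)" in
        *hpcnode033*|*hpcnode034*) echo "Deleting $job (slow node)"; qdel $job ;;
    esac
done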

If you have a bunch of similar jobs, run only one of them first and request much more memory than you expect it will need. Pick the one with the largest data set, so you know none of the others will use more. Once the job completes, run qstat -fx [JobID], where [JobID] is the job number you received back from qsub; the output shows how much memory the job actually consumed, so you can submit the rest with a matching resource request.

Estimate how many CPUs, how much memory, and how much walltime to request; the resources a running job uses should stay below the requested amounts.

# Suppose job 2420334 is finished
# Check resources used in this job
qstat -fx 2420334 > /working/lab_nickm/lunC/checking/qstat_job2420334.txt

# Compare requested resources with used resources
qstat -fx 2420334 | grep -E 'resources_used| Resource_List'
#    resources_used.cpupercent = 99
#    resources_used.cput = 00:44:17
#    resources_used.mem = 401768kb
#    resources_used.ncpus = 2
#    resources_used.vmem = 132439816kb
#    resources_used.walltime = 00:44:31
#    Resource_List.mem = 8gb
#    Resource_List.ncpus = 2
#    Resource_List.nodect = 1
#    Resource_List.place = pack
#    Resource_List.select = 1:mem=8gb:ncpus=2
#    Resource_List.walltime = 24:00:00

# Get info from a bunch of jobs
for job in {3808891..3808892}; do
    qstat -fx $job | grep -E 'Job Id:|Job_Name =|resources_used.walltime' >> $locLDOutput/qstat_LDBasedclumping;
done

qalter

Change the walltime of submitted jobs that are still queued. Only an HPC admin can change the requested time of jobs that are already running; email Scott Wood to request the change.

# Find the Job_Name (here "plink2_alpha_bgen_conti_binary_prelim.sh") in the full job listing
qstat -f 4826851.hpcpbs02
# Alter the walltime of all queued jobs with that name
qalter -l walltime=24:00:00 $(qselect -N plink2_alpha_bgen_conti_binary_prelim.sh)
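
# qselect can also pick jobs by owner and state, so one qalter call can retime everything you still have queued; a minimal sketch, assuming you want to extend all of your queued (state Q) jobs:
# Extend the walltime of all of my jobs that have not started running yet
qalter -l walltime=24:00:00 $(qselect -u $USER -s Q)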

qdel

Delete submitted jobs

# Suppose the job IDs are 3735046-3735127
for i in {3735046..3735127}; do
    qdel $i;
done
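
# Instead of looping over an ID range, qselect can hand qdel every matching job in one call; a sketch assuming you want to clear all of your queued jobs:
# Delete all of my jobs that are still queued (state Q)
qdel $(qselect -u $USER -s Q)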