# PacBio Revio Long Read Processing
###### tags: `Pacbio Data`
From chatting with Joey Prinz and PacBio Recommendations linked [here](https://www.pacb.com/wp-content/uploads/Application-note-Consolidated-analysis-tools-with-the-PacBio-WGS-Variant-Pipeline.pdf):
### 1. Upload fastq files using sftp
1. Log into your cluster using ssh
2. Type in `sftp username@dnaseq2.igsp.duke.edu`
* `sftp` stands for "secure file transfer protocol"; the username (the text before the @ sign) will be different each time
3. You will be prompted for the password. Enter the password provided
4. Set up the local directory you want to download these files into:
`lcd /datacommons/noor/sbm34/pacbiorevio/fastx_files`
5. Navigate through the sftp folders until you reach the directory that contains all the fastq files
6. Once you are **INSIDE** the directory containing all the fastq files you want, there are two ways to download them:
* Download all files within the project folder using `mget *`
* Download the folder AND everything within it using the **recursive** command `get -r Parent_folder`
7. To return to your main terminal prompt (leave sftp), type `exit` and hit enter.
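The interactive steps above can also be scripted with sftp's batch mode; a sketch, assuming the remote run directory is called `Project_folder` (a placeholder for your project's actual folder name):

```
# Hypothetical sftp batch file (save as get_fastq.batch).
# "Project_folder" is a placeholder for the real remote directory.
# Run it with:  sftp -b get_fastq.batch username@dnaseq2.igsp.duke.edu
lcd /datacommons/noor/sbm34/pacbiorevio/fastx_files
cd Project_folder
mget *
exit
```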
### 2. Install pbbioconda and pbmm2:
```
module load Anaconda3/2024.02
conda create -n pbmm2_env python=3.9 -y
conda activate pbmm2_env
# install pbmm2 into the new environment (bioconda channel)
conda install -c bioconda pbmm2
```
### 3. Upload the reference genome, then index it using pbmm2 and samtools
`pbmm2 index ....`
`samtools faidx dmel-all-chromosome-r6.57.fasta`
### 4. pbmm2 for alignment to reference genome:
```
#!/bin/bash
#SBATCH -e errs/%a_pbmm2-%A.err
#SBATCH -o outs/%a_pbmm2-%A.out
#SBATCH --mem-per-cpu=20G
#SBATCH -p scavenger
#SBATCH --cpus-per-task=4
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
#SBATCH -a 256,271,282,301,302,303
LL=${SLURM_ARRAY_TASK_ID}
pbmm2 align --log-level INFO ../ref/dmel-all-chromosome-r6.57.fasta ../fastx_files/L${LL}.hifi_reads.fastq.gz L${LL}aligned.bam --sort --rg "@RG\tID:L${LL}"
```
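Each array task in the script above just substitutes its SLURM array ID into the file names. A minimal sketch, simulating one array element outside SLURM:

```
# Outside SLURM, simulate one array element to see which files a task touches.
# (In a real job, SLURM sets SLURM_ARRAY_TASK_ID itself from the #SBATCH -a list.)
SLURM_ARRAY_TASK_ID=256
LL=${SLURM_ARRAY_TASK_ID}
echo "reads: ../fastx_files/L${LL}.hifi_reads.fastq.gz"
echo "bam:   L${LL}aligned.bam"
```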
### 5. Assess alignment quality using samtools
```
module load samtools/1.14-rhel8
samtools flagstat filename.bam
samtools coverage filename.bam
```
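If you just want the headline mapping rate out of the `flagstat` report, an awk one-liner can pull it; the text below is an invented stand-in for real `samtools flagstat` output:

```
# Made-up flagstat-style text; real numbers come from `samtools flagstat filename.bam`.
flagstat='6000000 + 0 in total (QC-passed reads + QC-failed reads)
5900000 + 0 mapped (98.33% : N/A)'
# Split on "(" and "%" so the percentage lands in field 2 of the "mapped" line.
pct=$(echo "$flagstat" | awk -F'[(%]' '/ mapped \(/ {print $2; exit}')
echo "mapping rate: ${pct}%"
```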
### 6. Filter for just chromosome 2 and create a new .bam.bai index for each filtered .bam file
```
#!/bin/bash
#SBATCH -e errs/%a_pbmm2-%A.err
#SBATCH -o outs/%a_pbmm2-%A.out
#SBATCH --mem-per-cpu=20G
#SBATCH -p scavenger
#SBATCH --cpus-per-task=4
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
#SBATCH -a 256,271,282,301,302,303
LL=${SLURM_ARRAY_TASK_ID}
module load samtools/1.14-rhel8
samtools view -b -o L${LL}_2L.bam L${LL}aligned.bam 2L
samtools view -b -o L${LL}_2R.bam L${LL}aligned.bam 2R
samtools merge L${LL}_chr2.bam L${LL}_2R.bam L${LL}_2L.bam
samtools index L${LL}_chr2.bam L${LL}_chr2.bam.bai
```
### 7. Call variants
#### DeepVariant (for SNPs and small variants)
Use the bin version from the [DeepVariant GitHub](https://github.com/google/deepvariant).
Pull the DeepVariant Docker image:
```
BIN_VERSION="1.6.1"
singularity pull docker://google/deepvariant:"${BIN_VERSION}"
```
Run DeepVariant:
```
#!/bin/bash
#SBATCH -e errs/%a_deepvariant-%A.err
#SBATCH -o outs/%a_deepvariant-%A.out
#SBATCH --mem-per-cpu=100G
#SBATCH -p scavenger
#SBATCH --cpus-per-task=16
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
#SBATCH -a 256,271,282,301,302,303
LL=${SLURM_ARRAY_TASK_ID}
singularity run -B /usr/lib/locale/:/usr/lib/locale/ -B /datacommons/noor/sbm34/:/datacommons/noor/sbm34/ -B /work/sbm34/:/work/sbm34/ \
deepvariant_1.6.1.sif \
/opt/deepvariant/bin/run_deepvariant \
--model_type=PACBIO \
--ref=/datacommons/noor/sbm34/pacbiorevio/ref/dmel-all-chromosome-r6.57.fasta \
--reads=/datacommons/noor/sbm34/pacbiorevio/bam/L${LL}_chr2.bam \
--output_vcf=/datacommons/noor/sbm34/pacbiorevio/vcf/L${LL}_dv_chr2.vcf.gz \
--output_gvcf=/datacommons/noor/sbm34/pacbiorevio/vcf/L${LL}_dv_chr2.g.vcf.gz \
--intermediate_results_dir /work/sbm34 \
--num_shards=16
```
This worked initially but then stopped; Tom Milledge says the problem is likely that the /datacommons volumes are no longer mounted on the compute nodes. Try copying the job files to /work and running from there. Here's the new code that works from /work (runs from /datacommons don't work anymore as of Jan 09, 2025):
```
#!/bin/bash
#SBATCH -e errs/%a_deepvariant-%A.err
#SBATCH -o outs/%a_deepvariant-%A.out
#SBATCH --mem-per-cpu=100G
#SBATCH -p scavenger
#SBATCH --cpus-per-task=16
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
#SBATCH -a 256,271,282,301,302,303
LL=${SLURM_ARRAY_TASK_ID}
singularity run -B /usr/lib/locale/:/usr/lib/locale/ -B /work/sbm34/:/work/sbm34/ \
deepvariant_1.6.1.sif \
/opt/deepvariant/bin/run_deepvariant \
--model_type=PACBIO \
--ref=/work/sbm34/pacbio/ref/dmel-all-chromosome-r6.57.fasta \
--reads=/work/sbm34/pacbio/bam/L${LL}_chr2.bam \
--output_vcf=/work/sbm34/pacbio/vcf/L${LL}_dv_chr2.vcf.gz \
--output_gvcf=/work/sbm34/pacbio/vcf/L${LL}_dv_chr2.g.vcf.gz \
--intermediate_results_dir /work/sbm34 \
--num_shards=16
```
### 8. pbsv (structural variants)
#### Create pbsv environment and add pbsv to it
```
conda create -n pbsv -c bioconda pbsv
conda activate pbsv
```
#### Create a .bed file of tandem repeats for the dmel reference genome
##### a. Download the correct version of TRF (I did trf409.macosx) from: https://github.com/Benson-Genomics-Lab/TRF/releases/tag/v4.09.1
##### b. Download the reference .fasta file from FlyBase (same version as you used to align!). Unzip it.
##### c. Navigate to the folder these are both in and run:
```
chmod +x trf409.macosx
./trf409.macosx dmel-all-chromosome-r6.57.fasta 2 7 7 80 10 50 500 -d -h
```
##### d. Use this python script to convert the output .dat file to a .bed file:
https://github.com/hdashnow/TandemRepeatFinder_scripts/blob/master/TRFdat_to_bed.py
`vim TRFdat_to_bed.py`, copy/paste the python code from the link above, make sure python is installed, and then run:
`python3 TRFdat_to_bed.py --dat dmel-all-chromosome-r6.57.fasta.2.7.7.80.10.50.500.dat --bed dmel-all-chromosome-r6.57-TRF.bed`
Upload the .bed file to the same folder as your reference genome
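For intuition, this is roughly what that .dat-to-.bed conversion does, sketched in awk on two invented TRF records (the linked script handles the full header; the coordinates and motifs here are fabricated):

```
# Fake TRF .dat fragment: "Sequence:" lines name the contig; data lines carry
# start, end, ... and the consensus motif in column 14. All values invented.
trf_dat='Sequence: 2L
100 151 2 26.0 2 95 0 68 50 0 0 50 1.00 AT ATATATAT
Sequence: 2R
300 420 12 10.0 12 90 2 150 25 25 25 25 2.00 ACGTACGTACGT ACGTACGTACGT'
bed=$(echo "$trf_dat" | awk '
  $1 == "Sequence:" { chrom = $2; next }                   # remember current contig
  /^[0-9]/ { print chrom "\t" ($1 - 1) "\t" $2 "\t" $14 }  # BED uses 0-based starts
')
printf '%s\n' "$bed"
```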
#### Run a shell script that identifies signatures of structural variation, written to an svsig.gz file (this uses the .bed file you just created)
```
#!/bin/bash
#SBATCH -e errs/%a_pbsv-%A.err
#SBATCH -o outs/%a_pbsv-%A.out
#SBATCH --mem-per-cpu=100G
#SBATCH -p scavenger
#SBATCH --cpus-per-task=16
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
#SBATCH -a 256,271,282,301,302,303
LL=${SLURM_ARRAY_TASK_ID}
pbsv discover --tandem-repeats ../ref/dmel-all-chromosome-r6.57-TRF.bed ../bam/L${LL}_chr2.bam ../vcf/L${LL}_chr2.svsig.gz
```
#### Now run this to create VCF files with called structural variants
```
#!/bin/bash
#SBATCH -e errs/%a_pbsv-%A.err
#SBATCH -o outs/%a_pbsv-%A.out
#SBATCH --mem-per-cpu=100G
#SBATCH -p scavenger
#SBATCH --cpus-per-task=16
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
#SBATCH -a 256,271,282,301,302,303
LL=${SLURM_ARRAY_TASK_ID}
pbsv call --ccs ../ref/dmel-all-chromosome-r6.57.fasta L${LL}_chr2.svsig.gz L${LL}_chr2_pbsv.vcf
```
### 9. trgt (for tandem repeats)
#### Install trgt
```
conda create -n trgt -c bioconda trgt
conda activate trgt
```
#### Reformat the .bed file created by TRF in the previous step so that it is compatible with trgt (use the following R code)
```
library(dplyr)
setwd("/Users/sarah/Downloads")
bed_og <- read.delim("dmel-all-chromosome-r6.57-TRF.bed",header = FALSE, sep = '\t')
bed <- bed_og %>% mutate(V5 = row_number())
bed <- bed %>% mutate(V6 = paste0("ID=TR", bed$V5))
bed <- bed %>% mutate(V7 = paste0("MOTIFS=", bed$V4))
bed <- bed %>% mutate (V8 = paste0("STRUC=(", bed$V4, ")n" ))
bed <- bed %>% mutate (V9 = paste0(bed$V6, ";", bed$V7, ";", bed$V8))
bed_trgt <- bed %>% select(V1, V2, V3, V9)
write.table(bed_trgt, file = "dmel-all-chromosome-r6.57-TRF-trgt.bed", row.names = FALSE, col.names = FALSE, quote = FALSE, sep="\t")
```
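The same reshaping as the R code above can be sketched as a one-pass awk command, if you'd rather stay on the cluster (input columns: chrom, start, end, motif; the coordinates below are invented):

```
# Append the trgt ID=/MOTIFS=/STRUC= annotation column to a 4-column TRF BED,
# numbering repeats by input line (same logic as the R code above).
trgt_bed=$(printf '2L\t99\t151\tAT\n2R\t299\t420\tACGT\n' | awk -F'\t' -v OFS='\t' '
  { print $1, $2, $3, "ID=TR" NR ";MOTIFS=" $4 ";STRUC=(" $4 ")n" }
')
printf '%s\n' "$trgt_bed"
```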
#### Run the following script in the trgt environment (this will also utilize the .bed file created by TRF in the previous step)
```
#!/bin/bash
#SBATCH -e errs/%a_trgt-%A.err
#SBATCH -o outs/%a_trgt-%A.out
#SBATCH --mem-per-cpu=100G
#SBATCH -p scavenger
#SBATCH --cpus-per-task=16
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
#SBATCH -a 256,271,282,301,302,303
LL=${SLURM_ARRAY_TASK_ID}
trgt genotype --flank-len 200 --genome ../ref/dmel-all-chromosome-r6.57.fasta \
--repeats ../ref/dmel-all-chromosome-r6.57-TRF-trgt.bed \
--reads ../bam/L${LL}_chr2.bam \
--output-prefix ../vcf/L${LL}_chr2_trgt
```
### 10. Phasing and haplotype calling using HiPhase
### First, index the VCF files using tabix
```
module load htslib/1.11
bgzip XXX.vcf
tabix -p vcf XXX.vcf.gz
```
The trgt VCFs also need to be sorted before they can be indexed:
```
module load bcftools/1.4
module load VCFtools/0.1.17
for FILE in *_chr2_trgt.vcf.gz; do
    SORTED_FILE="${FILE%.vcf.gz}.sorted.vcf.gz"
    bcftools view "$FILE" | vcf-sort | bgzip > "$SORTED_FILE"
    tabix -p vcf "$SORTED_FILE"
done
```
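What `vcf-sort` has to guarantee is header lines first, then records ordered by contig and numeric position; on a header-free, space-separated toy body that ordering is just:

```
# Toy VCF-like body (space-separated for readability; real VCF is tab-delimited).
# Sort by contig (field 1), then numerically by position (field 2);
# vcf-sort additionally keeps the ## header block at the top.
vcf_body='2R 500 . A T
2L 100 . G C
2L 50 . T A'
sorted=$(echo "$vcf_body" | sort -k1,1 -k2,2n)
printf '%s\n' "$sorted"
```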
### a. Create a brand new conda environment and install the latest HiPhase
`conda create -n hiphase -c bioconda hiphase`
`conda activate hiphase`
### b. write and run shell script
```
#!/bin/bash
#SBATCH -e errs/%a_hiphase-%A.err
#SBATCH -o outs/%a_hiphase-%A.out
#SBATCH --mem-per-cpu=30G
#SBATCH -p scavenger
#SBATCH --cpus-per-task=8
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
#SBATCH -a 30,83,72
LL=${SLURM_ARRAY_TASK_ID}
hiphase --blocks-file ../phased/L${LL}_hiphase_blocks.txt \
--summary-file ../phased/L${LL}_chr_summary.txt --ignore-read-groups \
--reference ../ref/dmel-all-chromosome-r6.57.fasta \
--vcf ../vcf/L${LL}_dv_chr2.vcf.gz \
--output-vcf ../phased/L${LL}_dv_phased_chr2.vcf.gz \
--vcf ../vcf/L${LL}_chr2_pbsv.vcf.gz \
--output-vcf ../phased/L${LL}_pbsv_phased_chr2.vcf \
--vcf ../vcf/L${LL}_chr2_trgt.sorted.vcf.gz \
--output-vcf ../phased/L${LL}_trgt_phased_chr2.vcf \
--bam ../bam/L${LL}_chr2.bam \
--output-bam ../phased/L${LL}_chr2.haplotagged.bam
```
### 11. Concatenate all three phased VCFs into one VCF per sample
```
#!/bin/bash
#SBATCH -e errs/%a_phasedvcfmerge-%A.err
#SBATCH -o outs/%a_phasedvcfmerge-%A.out
#SBATCH --mem-per-cpu=20G
#SBATCH -p scavenger
#SBATCH --cpus-per-task=4
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
#SBATCH -a 59,96,139,95,173,10,212,288,297,75,158,207,30,83,72
LL=${SLURM_ARRAY_TASK_ID}
module load htslib/1.11
module load bcftools/1.4
module load VCFtools/0.1.17
# Sort, compress, and index the TRGT VCF file
bcftools view ../phased/L${LL}_trgt_phased_chr2.vcf | vcf-sort | bgzip > ../phased/L${LL}_trgt_phased_chr2_sorted.vcf.gz && \
tabix -p vcf ../phased/L${LL}_trgt_phased_chr2_sorted.vcf.gz
# Sort, compress, and index the PBSV VCF file
bcftools view ../phased/L${LL}_pbsv_phased_chr2.vcf | vcf-sort | bgzip > ../phased/L${LL}_pbsv_phased_chr2_sorted.vcf.gz && \
tabix -p vcf ../phased/L${LL}_pbsv_phased_chr2_sorted.vcf.gz
bcftools concat --allow-overlaps \
../phased/L${LL}_dv_phased_chr2.vcf.gz \
../phased/L${LL}_pbsv_phased_chr2_sorted.vcf.gz \
../phased/L${LL}_trgt_phased_chr2_sorted.vcf.gz \
-Oz -o ../phased/L${LL}_phased_merged_chr2.vcf.gz
tabix -p vcf ../phased/L${LL}_phased_merged_chr2.vcf.gz
```
### 12. Annotate the phased VCF using SnpEff
```
#!/bin/bash
#SBATCH -e errs/%a_SnpEff_chr2-%A.err
#SBATCH -o outs/%a_SnpEff_chr2-%A.out
#SBATCH --mem-per-cpu=20G
#SBATCH -p scavenger
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
#SBATCH -a 30,34,45,59,73
LL=${SLURM_ARRAY_TASK_ID}
module load Java/23.0.1
java -Xmx8g -jar /work/sbm34/pacbio/snpEff/snpEff.jar \
-c /work/sbm34/pacbio/snpEff/snpEff.config Drosophila_melanogaster \
-o gatk \
/work/sbm34/pacbio/phased/L${LL}_phased_merged_chr2.vcf > /work/sbm34/pacbio/phased/L${LL}_phased_merged_chr2.ann.vcf
```
### 13. Label samples correctly and merge allelic sets together
##### a) Label samples
```
module load bcftools/1.4
# set LL to the sample's line number first, e.g. LL=83
bcftools reheader -s <(echo "UnnamedSample L${LL}") -o L${LL}_phased_merged_label_chr2.ann.vcf.gz L${LL}_phased_merged_chr2.ann.vcf.gz
tabix -p vcf L${LL}_phased_merged_label_chr2.ann.vcf.gz
```
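The `-s <(echo ...)` part feeds bcftools a two-column `old_name new_name` sample map through process substitution; on its own it expands to just this one line:

```
# The sample map that bcftools reheader -s reads: "old_name new_name" per line.
LL=83   # example line number from this dataset
sample_map=$(echo "UnnamedSample L${LL}")
echo "$sample_map"
```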
##### b) merge allelic sets
`bcftools merge -o merged_BSC307.ann.vcf.gz -O z L83_phased_merged_label_chr2.ann.vcf.gz L49_phased_merged_label_chr2.ann.vcf.gz L34_phased_merged_label_chr2.ann.vcf.gz L30_phased_merged_label_chr2.ann.vcf.gz L284_phased_merged_label_chr2.ann.vcf.gz L257_phased_merged_label_chr2.ann.vcf.gz L170_phased_merged_label_chr2.ann.vcf.gz L139_phased_merged_label_chr2.ann.vcf.gz L121_phased_merged_label_chr2.ann.vcf.gz L114_phased_merged_label_chr2.ann.vcf.gz`
`tabix -p vcf merged_BSC307.ann.vcf.gz`
### 14. Filter phased data to look for genetic variants
## BSC630, NippedA
#### a. Filter phased data to the region of interest and high-impact variants
```
#!/bin/bash
#SBATCH -e errs/%a_nippedA_filter-%A.err
#SBATCH -o outs/%a_nippedA_filter-%A.out
#SBATCH --mem-per-cpu=10G
#SBATCH -p scavenger
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
module load bcftools/1.4
bcftools view \
--regions 2R:5177965-5251022 \
--include '(FILTER="PASS" || FILTER=".") && (GT!="0/0" && GT!="1/1" && GT!="2/2") && (INFO/EFF ~ "HIGH")' \
-o ../phased/merged_NippedA_filtered.vcf \
-O v \
../phased/merged_NippedA.vcf.gz
```
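As a sanity check on what that `--include` expression keeps, here is the same logic mocked up in awk on two invented VCF records (space-separated for readability; the EFF strings are fabricated, and real filtering should use bcftools as above):

```
# Two fake records; cols: CHROM POS ID REF ALT QUAL FILTER INFO FORMAT SAMPLE.
# Keep PASS/"." records that are not homozygous and carry a HIGH-impact EFF.
vcf='2R 5180000 . A T 50 PASS EFF=HIGH|stop_gained GT 0/1
2R 5190000 . G C 50 PASS EFF=LOW|synonymous GT 0/1'
kept=$(echo "$vcf" | awk '($7=="PASS" || $7==".") && $10!="0/0" && $10!="1/1" && $10!="2/2" && $8 ~ /HIGH/')
echo "$kept"
```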
#### b. Now, pick a line that didn't map to that region and remove all variants from the filtered NippedA .vcf that also appear in the line that doesn't have a lethal mutation in NippedA
`bcftools isec -C -o UniqueNippedA_variants.vcf.gz -O z merged_NippedA_filtered.vcf.gz L224_phased_merged_chr2.vcf.gz`
## BSC307
#### a. Filter phased data to the region of interest and high-impact variants
```
#!/bin/bash
#SBATCH -e errs/%a_BSC307_filter-%A.err
#SBATCH -o outs/%a_BSC307_filter-%A.out
#SBATCH --mem-per-cpu=10G
#SBATCH -p scavenger
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
module load bcftools/1.4
bcftools view \
--regions 2R:13801956-13964325 \
--include '(FILTER="PASS" || FILTER=".") && (GT!="0/0" && GT!="1/1" && GT!="2/2") && (INFO/EFF ~ "HIGH" || INFO/EFF ~ "MODERATE")' \
-o ../phased/merged_BSC307_filtered_high_moderate.ann.vcf \
-O v \
../phased/merged_BSC307.ann.vcf.gz
```
`bgzip merged_BSC307_filtered_high_moderate.ann.vcf`
`tabix -p vcf merged_BSC307_filtered_high_moderate.ann.vcf.gz`
#### b. Now, pick a line that didn't map to that region and remove all variants from the filtered BSC307 .vcf that also appear in lines that don't have a lethal mutation in this region
`bcftools isec -C -o UniqueBSC307_variants_high_mod.vcf.gz -O z merged_BSC307_filtered_high_moderate.ann.vcf.gz merged_NippedA.vcf.gz`
#### Need to look at some lines specifically
##### L284 - mapped to the overlap between exel7128 and BSC383
```
bcftools view -r 2R:13839479-13897828 -o L284_mapOL_chr2.ann.vcf -O v L284_phased_merged_chr2.ann.vcf.gz
bcftools isec -C -o UniqueL284_mapOL_variants.vcf -O z L284_mapOL_chr2.ann.vcf.gz merged_NippedA.vcf.gz
```
##### L30 - mapped to the overlap between exel7128 and BSC383 (and shot)
```
bcftools view -r 2R:13839479-13897828 -o L30_mapOL_chr2.ann.vcf -O v L30_phased_merged_chr2.ann.vcf.gz
```
## r10
#### a. Filter phased data to the region of interest and high-impact variants
```
#!/bin/bash
#SBATCH -e errs/%a_r10_filter-%A.err
#SBATCH -o outs/%a_r10_filter-%A.out
#SBATCH --mem-per-cpu=10G
#SBATCH -p scavenger
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
module load bcftools/1.4
bcftools view \
--regions 2L:16165831-16190540 \
--include '(FILTER="PASS" || FILTER=".") && (GT!="0/0" && GT!="1/1" && GT!="2/2") && (INFO/EFF ~ "HIGH" || INFO/EFF ~ "MODERATE")' \
-o ../phased/merged_r10_filtered_high_moderate.ann.vcf.gz \
-O z \
../phased/merged_r10.ann.vcf.gz
```
## ed2308
Singleton L208 mapped to 1psc (posterior sex combs)
a) Annotate with SnpEff, label L208, make an index
b) Filter:
```
#!/bin/bash
#SBATCH -e errs/%a_1psc_L208_filter-%A.err
#SBATCH -o outs/%a_1psc_L208_filter-%A.out
#SBATCH --mem-per-cpu=10G
#SBATCH -p scavenger
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
module load bcftools/1.4
bcftools view \
--regions 2R:12965997-12980994 \
--include '(FILTER="PASS" || FILTER=".") && (GT!="0/0" && GT!="1/1" && GT!="2/2") && (INFO/EFF ~ "HIGH" || INFO/EFF ~ "MODERATE")' \
-o ../phased/L208_1psc_filter_high_mod_chr2.ann.vcf \
-O v \
../phased/L208_phased_merged_label_chr2.ann.vcf.gz
```
## ed441
L127 and L274 mapped to Fgop2
```
#!/bin/bash
#SBATCH -e errs/%a_Fgop2_filter-%A.err
#SBATCH -o outs/%a_Fgop2_filter-%A.out
#SBATCH --mem-per-cpu=10G
#SBATCH -p scavenger
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
module load bcftools/1.4
bcftools view \
--regions 2L:6943178-6945289 \
--include '(FILTER="PASS" || FILTER=".") && (GT!="0/0" && GT!="1/1" && GT!="2/2") && (INFO/EFF ~ "HIGH" || INFO/EFF ~ "MODERATE")' \
-o ../phased/L127_Fgop2_filter_high_mod_chr2.ann.vcf \
-O v \
../phased/L127_phased_merged_label_chr2.ann.vcf.gz
```
## ed385
L224 mapped to Krh1 (and so did L167, but we didn't sequence it)
```
#!/bin/bash
#SBATCH -e errs/%a_Krh1_filter-%A.err
#SBATCH -o outs/%a_Krh1_filter-%A.out
#SBATCH --mem-per-cpu=10G
#SBATCH -p scavenger
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
module load bcftools/1.4
bcftools view \
--regions 2L:6082603-6096498 \
--include '(FILTER="PASS" || FILTER=".") && (GT!="0/0" && GT!="1/1" && GT!="2/2") && (INFO/EFF ~ "HIGH" || INFO/EFF ~ "MODERATE")' \
-o ../phased/L224_Krh1_filter_high_mod_chr2.ann.vcf \
-O v \
../phased/L224_phased_merged_label_chr2.ann.vcf.gz
```
## ED1725
L96, L59, L256, L19
```
#!/bin/bash
#SBATCH -e errs/%a_drosha_filter-%A.err
#SBATCH -o outs/%a_drosha_filter-%A.out
#SBATCH --mem-per-cpu=10G
#SBATCH -p scavenger
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
module load bcftools/1.4
bcftools view \
--regions 2R:7932429-7936840 \
--include '(GT!="0/0" && GT!="1/1" && GT!="2/2") && (INFO/EFF ~ "HIGH" || INFO/EFF ~ "MODERATE")' \
-o ../phased/merged_drosha_filter_high_mod_chr2.ann.vcf \
-O v \
../phased/merged_ED1725.ann.vcf.gz
```
## ED2247
L207, L158, and L75 are allelic but haven't mapped to a single gene
```
#!/bin/bash
#SBATCH -e errs/%a_ed2247_filter-%A.err
#SBATCH -o outs/%a_ed2247_filter-%A.out
#SBATCH --mem-per-cpu=10G
#SBATCH -p scavenger
#SBATCH --mail-type=ALL --mail-user=sbm34@duke.edu
module load bcftools/1.4
bcftools view \
--regions 2R:11600106-11988720 \
--include '(GT!="0/0" && GT!="1/1" && GT!="2/2") && (INFO/EFF ~ "HIGH" || INFO/EFF ~ "MODERATE")' \
-o ../phased/merged_ed2247_filter_high_mod_chr2.ann.vcf \
-O v \
../phased/merged_ed2247.ann.vcf.gz
```
`bcftools isec -C -o UniqueED2247_filtered_variants.vcf.gz -O z merged_ed2247_filter_high_mod_chr2.ann.vcf merged_NippedA.vcf.gz`