Scripts in my GitHub
Species Name: yellow cardinal
Local species name: Cardenal amarillo
Scientific name: Gubernatrix cristata
Class: Aves
Order: Passeriformes
Family: Thraupidae
Genus: Gubernatrix
Conservation status: Endangered
Genome assembly naming (according to Rhie et al. 2021, and corroborated in CGP species list - page 23): bGubCri1.1
-Genome Assemblies of close species:
-Karyotype:
Link to paper with karyotype of the species: here
The karyotype of this species contains 78 chromosomes: 12 pairs of macrochromosomes + 27 pairs of microchromosomes.
Gubernatrix cristata 2n=78
Sample ID: 5697
Sample type: blood in 96% ethanol
Sex: female
Development stage: adult
Collectors: Dr. Bettina Mahler and Lic. Melina Atencio in the rescue center Fundacion Temaiken (CRET)
Origin: confiscated, in the Rescue Center since 16.10.2019. Assigned to the eastern management unit, where there is no hybridization (based on mtDNA + 10 microsatellite markers).
Location: ADD MAP
Date of collection: 19.04.2021
Date of sample delivery from Argentina to Germany: 19.04.2021
Date of sample arrival to Uni Potsdam: 10.05.2021
Date of DNA extraction: 17.05.2021
Extraction kit for HMW DNA: Circulomics Nanobind CBB Big DNA
Protocol: here, but preceded by 2X washes with PBS to remove the ethanol:
80 µl of the ethanol/blood mix pipetted into Protein LoBind tubes.
I sent 3 extractions from the same individual to be considered for sequencing.
DRESDEN-concept Genome Center (MPI-Dresden)
Contact: Dr. Sylke Winkler (winkler@mpi-cbg.de)
Date of sample delivery to sequencing center: 20.07.2021
Date of sample reception in the sequencing center: 21.07.2021
Template Specimen Form: here
Date of sequencing completion: 20.08.2021
Sequencing machine:
QC in sequencing center (extraction 3 chosen): here
1. bam to fq: bam2fastx using SMRTtool version 1.3.1 (also tested with bedtools 2.9)
submit_bam2fastx_newHPC.slurm
2. Trimming of PacBio SMRT adapters from reads (cutadapt) -> 4.5% of reads removed
submitcutadaptnewHPC.slurm
1. Fastqc on trimmed reads
submit_fastqc_trimmed_reads.slurm
2. Plot reads length
3. Reads statistics
Count reads on .fasta file (N = 2 489 222)
grep ">" /path_to_reads/reads_noAdapt.fasta | wc -l
Count reads on .fastq file (N = 2 489 222)
expr $(cat /path_to_reads/reads_noAdapt.fastq | wc -l) / 4
Using asmstats script:
submit_trimmed_asmstats.slurm
Outfile reads_noAdapt.fastq.stats here.
4. Kmer analysis on reads
With the resulting 21-mer or 31-mer histograms, GenomeScope2 was used to estimate the haploid genome length, repeat content, and heterozygosity:
Assuming k21:
submit_Jellyfish_trimmed_k21.slurm
I used jellyfish version 2.2.10
Assuming k31:
submit_Jellyfish_trimmed_k31.slurm
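The two slurm scripts presumably wrap jellyfish calls like the ones used later for the DeepConsensus reads; a minimal sketch for k=21 (output names are my own choice):
# count canonical 21-mers and build the histogram for GenomeScope2
jellyfish count -C -m 21 -s 1000000 -t 20 -o /path_to_reads/reads_noAdapt.k21.jf /path_to_reads/reads_noAdapt.fastq
jellyfish histo /path_to_reads/reads_noAdapt.k21.jf > /path_to_reads/reads_noAdapt.k21.histo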
Open the .histo files on the GenomeScope2 website
Kmer: 21
Haploid genome length: 1 057 467 553 bp ~ 1.06 Gb
Heterozygosity: 1.1%
Repeats (100 - uniq): 11.8%
Parameters for purge_dups:
Link to genomescope2 results here
Kmer: 31
Haploid genome length: 1 058 655 883 bp ~ 1.06 Gb
Heterozygosity: 0.99 %
Repeats (100 - uniq): 9.9%
Parameters for purge_dups:
Reads Coverage: 30.3X
(number of bases sequenced, i.e. the sum value in the .stats file) / (haploid genome size from GenomeScope2 according to the k-mer analysis)
= 32,036,044,402 / 1,058,655,883 ≈ 30.3
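A quick shell check of that division (assuming bc is available on the node):
echo "scale=2; 32036044402 / 1058655883" | bc
# 30.26, i.e. ~30.3X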
The estimated genome size for both k-mer sizes is around 1.06 Gb, which is close to the zebra finch (Taeniopygia guttata, 1.056 Gb) and to other birds such as the small tree finch (Camarhynchus parvulus, 1.064 Gb) and the medium ground finch (Geospiza fortis, 1.065 Gb).
1. Hifiasm (https://github.com/chhylp123/hifiasm) with different parameters for internal purging and k-mer size (l0, l1, l2 and nok [default k], k21, k31)
I used Hifiasm version 0.16.1-r375
scripts for assemblies (change parameters -l and -k accordingly)
submit_hifiasm_trimmed_l2_k31_primary.slurm
script for statistics for the asm:
Pri:
submit_asmstats_hifiasm_l2_p_k31.slurm
Primary assembly:
YelCar_hifiasm_trim_l2.k31.PRI.gfa
Looks good: most of the contigs do not have two branches.
Zoom-in of all contigs I found with bubbles:
(hopefully these will be resolved after purging - I cannot corroborate this since purge_dups does not provide a .gfa file)
Alternative assembly:
YelCar_hifiasm_trimmed_l2_k31.asm.a_ctg.gfa
It is more fragmented and has many more bubbles.
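For computing statistics on these assemblies, the GFA contigs first need to be converted to FASTA; a minimal sketch equivalent to the gfa2fa helper used later in this log (output name is my own choice):
# extract the sequence (S) lines of the hifiasm GFA as FASTA records
awk '/^S/{print ">"$2"\n"$3}' YelCar_hifiasm_trim_l2.k31.PRI.gfa > YelCar_hifiasm_trim_l2.k31.PRI.fa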
PURGING
purge_dups was used to reduce redundancy by purging false duplications, which occur when some secondary alleles are incorrectly retained in the primary contig set.
https://github.com/dfguan/purge_dups
Default and VGP-suggested cutoffs were used.
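The purging scripts below presumably follow the standard purge_dups workflow from its README; a minimal sketch under that assumption (file names, paths and thread counts are placeholders):
# map HiFi reads to the primary contigs and compute read-depth statistics
minimap2 -x map-hifi -t 24 YelCar_pri.fa /path_to_reads/reads_noAdapt.fastq | gzip -c > reads_vs_pri.paf.gz
pbcstat reads_vs_pri.paf.gz                # writes PB.base.cov and PB.stat
calcuts PB.stat > cutoffs 2> calcuts.log   # default cutoffs; pass -l/-m/-u for the VGP-suggested values
# self-alignment of the split primary assembly
split_fa YelCar_pri.fa > YelCar_pri.split.fa
minimap2 -x asm5 -DP YelCar_pri.split.fa YelCar_pri.split.fa | gzip -c > pri.split.self.paf.gz
# flag haplotypic duplications and extract the purged set
purge_dups -2 -T cutoffs -c PB.base.cov pri.split.self.paf.gz > dups.bed 2> purge_dups.log
get_seqs -e dups.bed YelCar_pri.fa         # writes purged.fa (p1) and hap.fa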
scripts for purging - 1st ROUND (purge PRI c1 to obtain p1)
submit_purge_dups_trimmed_c1.l2.k31_new.slurm
Pri (p1):
Alt (p2):
The job was run with --cpus-per-task=24, --mem=64G. It took 15 min.
scripts for purging - 2nd ROUND (purge ALT c2p2 to obtain q2)
submit_purge_dups_trimmed_c1.l2.k31_default_c2p2_21022022.slurm
Pri:
Alt (q2):
Requested --cpus-per-task=24, --mem=64G, and it took 15 min to run.
Values based on kmercov from genomescope2:
For k31:
For k21:
2. IPA
https://github.com/PacificBiosciences/pbipa
I used ipa (wrapper) version=1.3.1
It was run requesting 50G of memory and took 2 days to finish.
Pri:
Alt:
It was run with --cpus-per-task=24, --mem=24G and took 10 min.
https://git.imp.fu-berlin.de/cmazzoni/GEP
Developers: James Sullivan and Dr. Camila Mazzoni.
To evaluate all my hifiasm and IPA asms, before and after purging (Nt = 29 assemblies in total).
STEP 1: building the meryl kmer database
YelCar_hifi_sampleSheet.tsv
STEP 2: run the genome asm evaluation
runEval_YelCar_hifiasm_IPA.tsv
It took 1 day to run, requesting 80 cores and 5 jobs.
Assembly | hifiasm_l0_nok | purgedups_l0_nok_default | purgedups_l0_nok_VGPcutoffs | hifiasm_l0_k21 | purgedups_l0_k21_default | purgedups_l0_k21_VGPcutoffs | hifiasm_l0_k31 | purgedups_l0_k31_default | purgedups_l0_k31_VGPcutoffs | hifiasm_l1_nok | purgedups_l1_nok_default | purgedups_l1_nok_VGPcutoffs | hifiasm_l1_k21 | purgedups_l1_k21_default | purgedups_l1_k21_VGPcutoffs | hifiasm_l1_k31 | purgedups_l1_k31_default | purgedups_l1_k31_VGPcutoffs | hifiasm_l2_nok | purgedups_l2_nok_default | purgedups_l2_nok_VGPcutoffs | hifiasm_l2_k21 | purgedups_l2_k21_default | purgedups_l2_k21_VGPcutoffs | hifiasm_l2_k31 | purgedups_l2_k31_default | purgedups_l2_k31_VGPcutoffs | IPA | IPA_purged |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
total_num_bases | 2,224,168,269 | 1,351,576,889 | 1,188,367,148 | 2,222,471,140 | 1,346,836,325 | 1,182,863,508 | 2,223,097,156 | 1,355,784,751 | 1,182,862,056 | 1,332,099,590 | 1,176,726,747 | 1,176,647,996 | 1,331,886,907 | 1,180,543,672 | 1,180,465,568 | 1,330,877,578 | 1,179,270,444 | 1,177,972,303 | 1,245,686,462 | 1,165,793,779 | 1,165,793,779 | 1,244,971,205 | 1,172,156,874 | 1,172,156,874 | 1,243,890,104 | 1,164,967,086 | 1,164,967,086 | 1,080,623,443 | 1,070,029,210 |
size_estimate | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 | 1,058,734,261 |
max_heterozygosity | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% | 0.997831% |
GC_Content | 43.4626 | 43.1192 | 43.3178 | 43.4481 | 43.1054 | 43.2881 | 43.4551 | 43.0657 | 43.2764 | 43.9681 | 43.2168 | 43.2159 | 43.9665 | 43.2379 | 43.2376 | 43.9685 | 43.2309 | 43.2214 | 43.7381 | 43.1243 | 43.1243 | 43.7162 | 43.1699 | 43.1699 | 43.7021 | 43.1111 | 43.1111 | 42.4761 | 42.3778 |
total_num_scaffolds | 2,757 | 1,215 | 1,083 | 2,781 | 1,245 | 1,115 | 2,754 | 1,205 | 1,073 | 1,261 | 798 | 794 | 1,311 | 830 | 829 | 1,288 | 793 | 786 | 972 | 616 | 616 | 1,005 | 650 | 650 | 978 | 597 | 597 | 575 | 490 |
total_num_contigs | 2,757 | 1,215 | 1,083 | 2,781 | 1,245 | 1,115 | 2,754 | 1,205 | 1,073 | 1,261 | 798 | 794 | 1,311 | 830 | 829 | 1,288 | 793 | 786 | 972 | 616 | 616 | 1,005 | 650 | 650 | 978 | 597 | 597 | 590 | 501 |
number_of_gaps | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 15 | 11 |
Longest_scaffold | 46,214,770 | 46,214,770 | 46,214,770 | 46,214,784 | 46,214,784 | 46,214,784 | 46,214,776 | 46,214,776 | 46,214,776 | 50,262,241 | 50,262,241 | 50,262,241 | 50,372,431 | 50,372,431 | 50,372,431 | 50,428,791 | 50,428,791 | 50,428,791 | 64,130,911 | 64,130,911 | 64,130,911 | 65,868,658 | 65,868,658 | 65,868,658 | 69,566,365 | 69,566,365 | 69,566,365 | 33,552,962 | 33,552,962 |
Scaffold_N50_length | 2,955,405 | 7,258,968 | 8,379,364 | 2,998,866 | 7,383,908 | 8,477,060 | 2,998,866 | 7,258,958 | 8,379,360 | 10,709,157 | 14,151,982 | 14,151,982 | 11,908,915 | 14,446,381 | 14,446,381 | 12,368,032 | 14,424,108 | 15,226,941 | 17,289,722 | 17,435,227 | 17,435,227 | 17,291,129 | 17,435,228 | 17,435,228 | 17,435,235 | 21,431,074 | 21,431,074 | 8,893,208 | 8,893,208 |
Scaffold_NG50_length | 10,045,395 | 10,045,395 | 10,045,395 | 10,097,027 | 10,072,371 | 9,722,664 | 10,111,749 | 10,111,749 | 10,045,389 | 15,882,063 | 15,882,063 | 15,882,063 | 16,399,956 | 16,399,956 | 16,399,956 | 16,399,925 | 16,399,925 | 16,399,925 | 21,425,933 | 21,425,933 | 21,425,933 | 21,431,078 | 21,431,078 | 21,431,078 | 22,844,882 | 22,844,882 | 22,844,882 | 8,901,910 | 8,901,910 |
Scaffold_N95_length | 232,223 | 362,570 | 374,275 | 228,491 | 340,168 | 317,809 | 231,040 | 364,671 | 364,465 | 312,400 | 596,648 | 596,648 | 276,936 | 542,153 | 542,153 | 299,635 | 557,635 | 559,069 | 445,326 | 891,889 | 891,889 | 407,217 | 733,833 | 733,833 | 436,642 | 879,043 | 879,043 | 533,164 | 660,386 |
Scaffold_NG95_length | 3,485,820 | 2,473,102 | 1,574,909 | 3,496,061 | 2,473,082 | 1,604,414 | 3,554,867 | 2,531,255 | 1,574,891 | 2,730,996 | 2,153,208 | 2,153,208 | 2,698,482 | 2,219,116 | 2,219,116 | 2,698,460 | 2,219,116 | 2,219,116 | 3,399,398 | 2,986,878 | 2,986,878 | 3,320,541 | 3,236,925 | 3,236,925 | 3,386,191 | 3,231,517 | 3,231,517 | 868,503 | 868,503 |
Longest_contig | 46,214,770 | 46,214,770 | 46,214,770 | 46,214,784 | 46,214,784 | 46,214,784 | 46,214,776 | 46,214,776 | 46,214,776 | 50,262,241 | 50,262,241 | 50,262,241 | 50,372,431 | 50,372,431 | 50,372,431 | 50,428,791 | 50,428,791 | 50,428,791 | 64,130,911 | 64,130,911 | 64,130,911 | 65,868,658 | 65,868,658 | 65,868,658 | 69,566,365 | 69,566,365 | 69,566,365 | 33,552,962 | 33,552,962 |
Contig_N50_length | 2,955,405 | 7,258,968 | 8,379,364 | 2,998,866 | 7,383,908 | 8,477,060 | 2,998,866 | 7,258,958 | 8,379,360 | 10,709,157 | 14,151,982 | 14,151,982 | 11,908,915 | 14,446,381 | 14,446,381 | 12,368,032 | 14,424,108 | 15,226,941 | 17,289,722 | 17,435,227 | 17,435,227 | 17,291,129 | 17,435,228 | 17,435,228 | 17,435,235 | 21,431,074 | 21,431,074 | 8,893,208 | 8,893,208 |
Contig_NG50_length | 10,045,395 | 10,045,395 | 10,045,395 | 10,097,027 | 10,072,371 | 9,722,664 | 10,111,749 | 10,111,749 | 10,045,389 | 15,882,063 | 15,882,063 | 15,882,063 | 16,399,956 | 16,399,956 | 16,399,956 | 16,399,925 | 16,399,925 | 16,399,925 | 21,425,933 | 21,425,933 | 21,425,933 | 21,431,078 | 21,431,078 | 21,431,078 | 22,844,882 | 22,844,882 | 22,844,882 | 8,901,910 | 8,901,910 |
Contig_N95_length | 232,223 | 362,570 | 374,275 | 228,491 | 340,168 | 317,809 | 231,040 | 364,671 | 364,465 | 312,400 | 596,648 | 596,648 | 276,936 | 542,153 | 542,153 | 299,635 | 557,635 | 559,069 | 445,326 | 891,889 | 891,889 | 407,217 | 733,833 | 733,833 | 436,642 | 879,043 | 879,043 | 527,794 | 625,785 |
Contig_NG95_length | 3,485,820 | 2,473,102 | 1,574,909 | 3,496,061 | 2,473,082 | 1,604,414 | 3,554,867 | 2,531,255 | 1,574,891 | 2,730,996 | 2,153,208 | 2,153,208 | 2,698,482 | 2,219,116 | 2,219,116 | 2,698,460 | 2,219,116 | 2,219,116 | 3,399,398 | 2,986,878 | 2,986,878 | 3,320,541 | 3,236,925 | 3,236,925 | 3,386,191 | 3,231,517 | 3,231,517 | 856,071 | 856,071 |
qv_score | 61.9987 | 62.0147 | 61.6227 | 61.945 | 62.0304 | 61.624 | 62.0604 | 62.1082 | 61.6803 | 60.8608 | 61.6912 | 61.6909 | 60.8165 | 61.7131 | 61.7129 | 60.8343 | 61.617 | 61.635 | 60.5357 | 61.342 | 61.342 | 60.389 | 61.3572 | 61.3572 | 60.4856 | 61.3088 | 61.3088 | 56.3843 | 57.2844 |
kmer_completeness | 99.3468 | 85.5233 | 82.6667 | 99.3354 | 85.4438 | 82.561 | 99.3428 | 85.6085 | 82.5495 | 84.2954 | 82.507 | 82.5067 | 84.3931 | 82.6174 | 82.6171 | 84.4225 | 82.6217 | 82.6059 | 83.2772 | 82.4928 | 82.4928 | 83.3569 | 82.6469 | 82.6469 | 83.3613 | 82.5778 | 82.5778 | 81.3829 | 81.2586 |
Complete_Buscos | 96.2% | 95.8% | 95.8% | 96.3% | 95.8% | 95.9% | 96.3% | 95.8% | 95.7% | 96.0% | 95.8% | 95.8% | 96.0% | 95.9% | 95.9% | 96.0% | 95.8% | 95.8% | 96.0% | 95.9% | 95.9% | 96.0% | 95.9% | 95.9% | 96.1% | 96.0% | 96.0% | 95.6% | 95.7% |
Complete_single_buscos | 16.8% | 78.6% | 94.8% | 16.9% | 78.7% | 94.7% | 16.8% | 78.4% | 94.7% | 87.1% | 94.7% | 94.7% | 87.5% | 94.8% | 94.8% | 87.2% | 94.8% | 94.8% | 93.6% | 94.9% | 94.9% | 93.7% | 94.9% | 94.9% | 93.8% | 95.0% | 95.0% | 94.6% | 94.7% |
PDF report for each asm evaluated here
Note: the K-mer Multiplicity PRI (stacked) plot is wrong - the plot actually shows the flattened pri+alt. The correct plot is in the merqury folder: asmnamemerqOutput.spectra-cn.st.png
-Merqury copy number spectrum plots (similar to KAT):
tracks the multiplicity of each k-mer found in the reads and colors it by the number of times it is found in a given assembly
-> when using hifiasm.l0, the plots for the primary asm also show a blue peak (denoting k-mers appearing twice), which should not be there since the asm is already phased. The same happens for hifiasm.l1, but there the blue peak is gone after purging.
->no asms showed a bar at the origin of the plot representing kmers found only in the asm
-> the high fraction of k-mers missing from the assembly (black, read-only peak) seen in all asm plots probably reflects k-mers specific to the other haplotype! This is corroborated by the flattened pri+alt plot, in which the grey read-only peak disappears.
-kmer completeness: first filter out low-copy k-mers that are likely to represent sequencing errors, and then define k-mer completeness as the fraction of reliable k-mers in the read set that are also found in the assembly.
-qv-score:(Phred quality score, kmer survival rate, or assembly consensus quality value) assumes that kmers found only in the asm are errors. It depends on the coverage, quality of the read set, and kmer size chosen. Higher QVs indicate a more accurate consensus.
FullTable_asms_comparisons.xlsx
Criterion | Best assembly | Best value |
---|---|---|
Contig N50>10Mb | hifiasm.l2.k31.purged (both with default or VGP suggested values) | 21 Mb |
Busco completeness>95% | all, but best: hifiasm.l0 (before purging), and hifiasm.l2.k31.purged (both with default or VGP suggested values) | 96% |
Busco duplicated rate<3% | all, except hifiasm.l0 and l1 before purging, and l0 after purging with default values | 1% |
Lowest number of contigs | IPA, IPA_purged, hifiasm.l2.k31.purged (default and VGP cutoffs) | |
Highest qv-score | hifiasm.l0.k31.purged.default | 62.11 |
Highest kmer completeness | hifiasm.l0.k31 | 99.34 |
copy number spectrum plots | all hifiasm.l1.purged, hifiasm.l2 and all hifiasm.l2.purged | no blue peak, no bar of kmers found only in the asm |
All the QC metrics are better for asm hifiasm.l2.k31.purged except the qv-score and kmer completeness.
We estimated a very high base accuracy, with consensus quality values (QV) of 61.3 and 62.8 for the two haplotypes; QV 61.3 corresponds to an error rate of about 7.4 × 10^-7, i.e. roughly one error per ~1.3 Mb, or ~860 errors across the 1,164,967,086 bp primary assembly.
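For reference, a Phred-scaled QV converts to an error rate as 10^(-QV/10); a quick check with awk:
# per-base error probability and expected spacing between errors for QV 61.3
awk 'BEGIN{qv=61.3; p=10^(-qv/10); printf "error rate = %.2e, ~1 error per %.1f Mb\n", p, 1/(p*1e6)}'
# error rate = 7.41e-07, ~1 error per 1.3 Mb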
https://github.com/marbl/canu/releases
module purge
module load lang/Anaconda3/2020.11
source activate java11-env
export PATH=/work/dominguez/YelCar_GenomeAssembly_MPI/canu-2.1.1/bin:$PATH
canu -assemble -d /path/hicanu_FatNode/ -p YelCar_hicanu_assemble genomeSize=1.06g -pacbio-hifi /path_ro_reads/reads_noAdapt.fastq useGrid=false
#change genomeSize for what was predicted by genomescope2
#Requested --mem=150G, --cpus-per-task=8, and took xx to finish.
https://github.com/AntonBankevich/LJA
module purge
module load lib/zlib/1.2.11-GCCcore-10.2.0
module load lang/Anaconda3
source activate cmake-env
cd /path/LJA/
/work/dominguez/LJA/bin/lja -o /path/LJA_fatnode_Cintel/ --reads /path_to_reads/reads_noAdapt.fastq --threads 8 --diploid
The job to assemble the HiFi reads does not seem to use much memory over most of its runtime; only at the end does it perform a huge allocation, and it gets killed because of that. So I then ran it on our "fat" node (where I can request --mem=1500G). In this case, the job failed after 14 days of running due to the time limit. I then tested it with a shorter version of the infile, requesting 24 CPUs (--mem-per-cpu=16G). It took 1 h and then also failed at the step of uncompressing homopolymers in contigs. It produced the expected output files except for the final FASTA assembly and the final GFA.
When the fat node is free, try: submitLJAtrimmedFatNodeintelv2.slurm
https://github.com/ekimb/rust-mdbg
I did not try it given that it does not resolve both haplotypes.
https://github.com/mummer4/mummer
https://mummer4.github.io/manual/manual.html
Compare my best asm with a reference from a related species genome.
There are no VGP genomes in the same family (Thraupidae).
I could use another family, like:
Zebra finch genome VGP(GCF_003957565.2_bTaeGut1.4.pri)…
or a species of the same family (but with a less good asm), like the small tree finch (GCA_902806625.1_Camarhynchus_parvulus_V1.1) or the medium ground finch (GCF_000277835.1_GeoFor_1.0).
The genome asm for the small tree finch (Camarhynchus parvulus) is slightly better (31X genome coverage, fewer gaps).
MUMmer performs all-vs-all comparisons of nucleotide sequences contained in multi-FASTA files. It is best used for similar sequences that may have large rearrangements.
Download the Camarhynchus parvulus (Campar) asm from NCBI (in the terminal):
wget -O Campar_genome1.1.fna.gz ftp://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/902/806/625/GCA_902806625.1_Camarhynchus_parvulus_V1.1/GCA_902806625.1_Camarhynchus_parvulus_V1.1_genomic.fna.gz
Job script: submit_mummer_Hifiasm.l2.k31.p.PRI_vs_Campar.slurm. Ran with --cpus-per-task=24, --mem=64G, and took 3 h.
I used NUCmer (NUCleotide MUMmer) version 3.1
Dotplot between two assemblies (ran delta-filter, show-coords and mummerplot):
submit_mummer_Hifiasm.l2.k31.p.PRI_vs_Campar_2.slurm
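Both slurm scripts presumably wrap the usual nucmer/mummerplot chain; a minimal sketch under that assumption (prefix and query file names are my own, the reference is the unzipped Campar download):
# whole-genome alignment of the purged primary assembly against C. parvulus
nucmer -p YelCar_vs_Campar Campar_genome1.1.fna Hifiasm.l2.k31.p.PRI.purged.fa
# keep 1-to-1 alignments, tabulate coordinates and draw the dotplot
delta-filter -1 YelCar_vs_Campar.delta > YelCar_vs_Campar.1to1.delta
show-coords -rcl YelCar_vs_Campar.1to1.delta > YelCar_vs_Campar.coords
mummerplot --png --layout -p YelCar_vs_Campar YelCar_vs_Campar.1to1.delta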
Towards the bottom of the .gp file, change these lines:
set linestyle 1 lt 1 lw 0.3 pt 6 ps 0.1
set linestyle 2 lt 3 lw 0.3 pt 6 ps 0.1
set linestyle 3 lt 2 lw 0.3 pt 6 ps 0.1
I can do a circos plot with the mummer .coords file.
blastn -db /srv/public/shared/data/blastDB/nt/nt -query /srv/public/users/dominguez/asm/hifiasm.l2.k31.default_p1_purged_default.fa -out /srv/public/users/dominguez/2ndSpikeIn_testAdapters/blast/RefGenome_vs_ncbi/hifiasm.l2.k31.default_p1_purged_NT.xml -max_target_seqs 1 -num_threads 36 -evalue 10e-20 -outfmt 5
Of 597 contigs, 27 had no hits.
The vast majority of hits fall within Aves, with a very small number of hits to Boreoeutheria (placental mammals, n=2) and Austrofundulus limnaeus (fish, n=1).
Alternative to pbccs for the generation of ccs reads. It is ~20X more computationally expensive, but it reduces read errors, yields more coverage from the same PacBio reads, and increases contig N50 and consensus quality (QV).
Paper here: https://www.biorxiv.org/content/10.1101/2021.08.31.458403v1.full.pdf
github here: https://github.com/google/deepconsensus
DeepVariant_DeepConsensus (kate file):
Check if some jobs in the array failed:
ls -dq *output.fastq | wc -l
#9996
#there are 4 jobs missing
ub=10000
seq "$ub" | while read -r i; do [[ -f "$i.m64046_210816_140027.output.fastq" ]] || echo "$i is missing"; done
#jobs failed that I should re-run:
#710
#1165
#1264
#1463
Re-run those manually to retrieve the missing chunks.
Simply use the same job script, comment out the array part, set imax to 10000 and i to the index of the missing chunk.
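Alternatively (a hedged suggestion, array script name hypothetical), Slurm can resubmit just the failed indices without editing the script:
# resubmit only the four missing array indices; a command-line --array overrides the #SBATCH directive
sbatch --array=710,1165,1264,1463 submit_DeepConsensus_array.slurm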
#concatenate all fastq outputs: DeepConsensus_cat_fastq.sh
cat /work/dominguez/YelCar_GenomeAssembly_MPI/reads/DeepConsensus/YelCarReads_chunks/results/*.output.fastq > /work/dominguez/YelCar_GenomeAssembly_MPI/reads/DeepConsensus/YelCarReads_chunks/results/total_ccs.fastq
#total_ccs.fastq is ~75.5 GB and the old ccs.fasta was 126 GB.
Re-run the whole pipeline with this new ccs file.
1a. TRIMMING. Removing adapter contaminated reads:
submit_cutadapt_new_ccs_DC.sh
cutadapt -j 10 -o /path/new_ccs_DC_noAdapt.fastq /path/total_ccs.fastq -b ATCTCTCTCAACAACAACAACGGAGGAGGAGGAAAAGAGAGAGAT -b ATCTCTCTCTTTTCCTCCTCCTCCGTTGTTGTTGTTGAGAGAGAT --discard-trimmed
4.4% reads were removed because they contained adapters (cutadapt version 3.5)
BUT I found this paper comparing the performance of cutadapt to a new tool for filtering reads with adapters (HiFiAdapterFilt). They are similar, but the command they used for cutadapt is different, so I tried that one [see below].
1b. Reads Statistics (asmstats) before and after trimming
submit_asmstats_new_DC_ccs.sh
#BEFORE TRIMMING
asmstats /path/total_ccs.fastq > /path/total_ccs.fastq.stats
#AFTER TRIMMING
asmstats /path/new_ccs_DC_noAdapt.fastq > /path/new_ccs_DC_noAdapt.fastq.stats
Before trimming: 2 977 781 ccs reads !
After trimming: 2 846 116 ccs reads with NO ADAPTERS!
1c. Fastqc after trimming adapters (IGNORE THIS BECAUSE FASTQC PERFORMS BETTER FOR SHORT READS - AND I HAVE LONG READS!)
submit_fastqc_new_DC_ccs.sh
link to report in GoogleDrive
fastqc -t 6 /path/new_ccs_DC_noAdapt.fastq -o /path/new_ccs_DC_noAdapt
PROBLEM: Per-base quality scores were higher (~70%) before running DeepConsensus than now (~50%)! Also, quality used to decrease with position in the read, whereas now it increases until it reaches a plateau.
PROBLEM: Average quality per read was ~90 before, not ~50!
1d. Kmer analysis (jellyfish + genomescope2)
submit_Jellyfish_new_DC_ccs_trimmed_k31.sh
jellyfish count -C -m 31 -s 1000000 -t 20 -o /path/new_ccs_DC_noAdapt.k31.jf /path/new_ccs_DC_noAdapt.fastq
jellyfish histo /path/new_ccs_DC_noAdapt.k31.jf > /path/new_ccs_DC_noAdapt.k31.histo
Haploid genome length: 1 061 015 273 bp ~ 1.06 Gb
Heterozygosity: 0.978 %
Repeats (100 - uniq): 10%
Parameters for purge_dups:
Estimated genome haploid size: xx bp
Transition parameter (kmercov × 1.5): 15.7 × 1.5 -> 24 reads (integer)
Maximum read depth (transition parameter × 3): 24 × 3 -> 72 reads (integer)
Reads Coverage: 35x
(number of bases sequenced, i.e. the sum value in the .stats file) / (genome size from GenomeScope2 according to the k-mer analysis)
= 36,870,266,213 / 1,061,441,021 ≈ 35
CONCLUSION: Reads coverage improved!
Re-run the jellyfish histo command with a larger -h parameter (because many k-mers were grouped into the default maximum-count category), since the genome size estimate can change.
jellyfish count -C -m 31 -s 1000000 -t 20 -o /path/new_ccs_DC_noAdapt.k31_h.jf /path/new_ccs_DC_noAdapt.fastq
jellyfish histo -h 100000000 /path/new_ccs_DC_noAdapt.k31_h.jf > /path/new_ccs_DC_noAdapt.k31_h.histo
Estimated Genome Size increased:
Haploid genome length: 1 101 204 876 bp ~ 1.1 Gb
Heterozygosity: 0.978 %
Repeats (100 - uniq): 13.2%
Parameters for purge_dups:
Transition parameter (kmercov × 1.5): 15.7 × 1.5 -> 24 reads (integer)
Maximum read depth (transition parameter × 3): 24 × 3 -> 72 reads (integer)
Reads Coverage: 33.5x
(number of bases sequenced, i.e. the sum value in the .stats file) / (genome size from GenomeScope2 according to the k-mer analysis)
= 36,870,266,213 / 1,101,204,876 ≈ 33.5
CONCLUSION: Estimated haploid genome size and reads coverage both increased a little!
1a. cutadapt new command.
I found this paper comparing the performance of cutadapt to a new tool for filtering reads with adapters (HiFiAdapterFilt). They are similar, but the command they used for cutadapt is different (apart from the universal adapter, they also trim a second primer and force a 100% overlap), so I tried this one.
submit_cutadapt_new_ccs_DC_OtherAdapter.sh
cutadapt -j 10 -o /path/new_ccs_DC_noAdapt_Others.fastq /path/total_ccs.fastq -b "AAAAAAAAAAAAAAAAAATTAACGGAGGAGGAGGA;min_overlap=35" -b "ATCTCTCTCTTTTCCTCCTCCTCCGTTGTTGTTGTTGAGAGAGAT;min_overlap=45" --discard-trimmed --revcomp -e 0.1
Now only 315 reads (0.0%) were removed !!!
To find out which adapters were used in my sequencing run, check the subreadset.xml file (it is OK because they are universal; see page 4 here).
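A quick way to inspect the run metadata (path hypothetical):
# look for adapter entries in the PacBio run metadata
grep -i "adapter" /path_to_run/*.subreadset.xml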
1b. Reads statistics with asmstats on new trimmed reads
asmstats /path/new_ccs_DC_noAdapt_Others.fastq > /path/new_ccs_DC_noAdapt_Others.fastq.stats
1c. FASTQC after trimming the new adapters
THIS IS NOT IMPORTANT AS THIS TOOL IS FOR SHORT READS AND NOT LONG HIFI READS!!!
submit_fastqc_new_DC_ccs_noAdaptOthers.sh
fastqc -t 6 /path/new_ccs_DC_noAdapt_Others.fastq -o /path/FASTQC_new_ccs_DC_noAdaptOthers
Per-base quality scores are still worse than before DC.
1d. Kmer analysis (jellyfish + genomescope2)
submit_Jellyfish_new_DC_ccs_trimmed_Others_k31_h.sh
jellyfish count -C -m 31 -s 1000000 -t 20 -o /path/new_ccs_DC_noAdapt_Others.k31_h.jf /path/new_ccs_DC_noAdapt_Others.fastq
jellyfish histo -h 100000000 /path/new_ccs_DC_noAdapt_Others.k31_h.jf > /path/new_ccs_DC_noAdapt_Others.k31_h.histo
Haploid genome length: 1 100 327 273 bp ~ 1.1 Gb
Heterozygosity: 0.977 %
Repeats (100 - uniq): 13.2%
Parameters for purge_dups (see the calcuts sketch below):
Transition parameter (kmercov × 1.5): 16.4 × 1.5 -> 25 reads (integer)
Maximum read depth (transition parameter × 3): 25 × 3 -> 75 reads (integer)
Reads Coverage: 35x
(number of bases sequenced, i.e. the sum value in the .stats file) / (genome size from GenomeScope2 according to the k-mer analysis)
= 38,570,683,500 / 1,100,327,273 ≈ 35
CONCLUSION: Estimated haploid genome size and reads coverage both increased a little!
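Assuming these GenomeScope2-derived values are meant to be fed to calcuts, as in the VGP purge_dups recipe, a minimal sketch (PB.stat comes from pbcstat, see the purge_dups workflow above):
# transition (-m) and upper (-u) read-depth cutoffs from the kmercov-based values above
calcuts -m 25 -u 75 PB.stat > cutoffs_VGP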
1e. Plot reads length
The plotting script only works with FASTA files.
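The trimmed FASTQ can therefore be converted first, e.g. with seqtk (a hedged sketch; seqtk is already part of this workflow's conda environment):
# convert the trimmed FASTQ to gzipped FASTA for the length plot
seqtk seq -a /path/new_ccs_DC_noAdapt_Others.fastq | gzip > /path/new_ccs_DC_noAdapt_Others.fasta.gz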
module load lang/Anaconda3/2020.11
source activate biopython-env
export PYTHONPATH=$PYTHONPATH:/work/dominguez/YelCar_GenomeAssembly_MPI
python -m plot_fasta_length_v3 /path/new_ccs_DC_noAdapt_Others.fasta.gz /path/new_ccs_DC_noAdapt_Others.fastq.Plotlength.png
All with k31, changing purging parameter l0, l1 and l2
submit_hifiasm_new_ccs_DC_noAdaptOthers_l0_k31_primary.sh
hifiasm -t 38 -l0 --primary -o /path/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l0_k31.asm /path/new_ccs_DC_noAdapt_Others.fastq -k 31
# convert gfa to fa
/work/dominguez/YelCar_GenomeAssembly_MPI/gfa2fa /path/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l0_k31.asm.p_ctg.gfa > /path/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l0_k31.asm.p_ctg.fa
/work/dominguez/YelCar_GenomeAssembly_MPI/gfa2fa /path/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l0_k31.asm.a_ctg.gfa > /path/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l0_k31.asm.a_ctg.fa
# statistics for primary asm c1
module purge
module load lang/Anaconda3/2020.11
source activate seqtk-env
export PATH=$PATH:/work/dominguez/YelCar_GenomeAssembly_MPI/
asmstats /path/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l0_k31.asm.p_ctg.fa > /path/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l0_k31.asm.p_ctg.fa.stats
# statistics for asm alternate c2
asmstats /path/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l0_k31.asm.a_ctg.fa > /path/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l0_k31.asm.a_ctg.fa.stats
Statistics for primary assembly hifiasm l0:
Statistics for the alternate assembly hifiasm l0:
Statistics for primary assembly hifiasm l1:
Statistics for alternate assembly hifiasm l1:
Statistics for primary assembly hifiasm l2:
827 contigs, contig N50: 22.8 Mb
Statistics for alternate assembly hifiasm l2:
CONCLUSION: The hifiasm primary l2 assembly improved with the DC reads compared to before (fewer contigs, greater average contig length and N50).
rsync -vz --progress dominguez@n-hpc-login1.rz.uni-potsdam.de:/path/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l2_k31.asm.p_ctg.gfa .
some contigs with bubbles:
Use purge_dups to purge false duplications that occurred by incorrectly retaining some secondary alleles in the primary contig set.
1st ROUND (purge PRI c1 to obtain p1):
submit_purge_dups_trimmed_c1.l0.k31_new_ccs_DC_trim_reads.sh
submit_purge_dups_trimmed_c1.l1.k31_new_ccs_DC_trim_reads.sh
submit_purge_dups_trimmed_c1.l2.k31_new_ccs_DC_trim_reads.sh
Hifiasm_l2_purged_VGP: 519 contigs, N50-contigs= 26.4Mb
2nd ROUND (purge ALT c2p2 to obtain q2):
submit_purge_dups_l0.k31_default_c2p2_new_ccs_DC_trim.sh
l0 hap_default.fa.stats
l0 purged_default.fa (q2)
submit_purge_dups_l1.k31_default_c2p2_new_ccs_DC_trim.sh
l1 hap_default.fa.stats
l1 purged_default.fa.stats (q2)
submit_purge_dups_l2.k31_default_c2p2_new_ccs_DC_trim.sh
l2 hap_default.fa.stats
l2 purged_default.fa.stats (q2)
with C. parvulus:
kate file
New: this time using Busco lineage aves_odb10
To evaluate all my hifiasm asms (n=3), before and after purging (n=6).
STEP 1: building the meryl kmer database
https://github.com/marbl/merqury/wiki/1.-Prepare-meryl-dbs
This was done manually (slurm script) not using GEP:
script here
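The script presumably follows the Merqury wiki recipe; a minimal sketch (the k value and output name are my assumptions):
# build the read k-mer database used by Merqury
meryl k=21 count output YelCar_reads.meryl /path/new_ccs_DC_noAdapt_Others.fastq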
STEP 2. Merqury: compare asms before and after purging (see the merqury.sh sketch after the path list below)
purging l0, k31
#BEFORE purging PRI (but after hifiasm):
/work/dominguez/YelCar_GenomeAssembly_MPI/asm/reads_trimmed_DC/hifiasm_l0/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l0_k31.asm.p_ctg.fa
# BEFORE purging ALT:
/work/dominguez/YelCar_GenomeAssembly_MPI/asm/reads_trimmed_DC/hifiasm_l0/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l0_k31.asm.a_ctg.fa
# AFTER purging PRIM p1:
/work/dominguez/YelCar_GenomeAssembly_MPI/asm/reads_trimmed_DC/hifiasm_l0/hifiasm_l0_purge_dups/c1.l0.k31/purged_default.fa
# AFTER PURGING ALT q2 (q2 is the purged.fa after 2nd round of purge_dups), default:
/work/dominguez/YelCar_GenomeAssembly_MPI/asm/reads_trimmed_DC/hifiasm_l0/hifiasm_l0_purge_dups/c1.l0.k31/q2_default/purged_default.fa
purging l1, k31
#BEFORE purging PRI (but after hifiasm):
/work/dominguez/YelCar_GenomeAssembly_MPI/asm/reads_trimmed_DC/hifiasm_l1/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l1_k31.asm.p_ctg.fa
# BEFORE purging ALT:
/work/dominguez/YelCar_GenomeAssembly_MPI/asm/reads_trimmed_DC/hifiasm_l1/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l1_k31.asm.a_ctg.fa
# AFTER purging PRIM p1:
/work/dominguez/YelCar_GenomeAssembly_MPI/asm/reads_trimmed_DC/hifiasm_l1/hifiasm_l1_purge_dups/c1.l1.k31/purged_default.fa
# AFTER PURGING ALT q2, default:
/work/dominguez/YelCar_GenomeAssembly_MPI/asm/reads_trimmed_DC/hifiasm_l1/hifiasm_l1_purge_dups/c1.l1.k31/q2_default/purged_default.fa
purging l2, k31
#BEFORE purging PRI (but after hifiasm):
/work/dominguez/YelCar_GenomeAssembly_MPI/asm/reads_trimmed_DC/hifiasm_l2/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l2_k31.asm.p_ctg.fa
# BEFORE purging ALT:
/work/dominguez/YelCar_GenomeAssembly_MPI/asm/reads_trimmed_DC/hifiasm_l2/YelCar_hifiasm_new_ccs_DC_noAdaptOthers_l2_k31.asm.a_ctg.fa
# AFTER purging PRIM p1:
/work/dominguez/YelCar_GenomeAssembly_MPI/asm/reads_trimmed_DC/hifiasm_l2/hifiasm_l2_purge_dups/c1.l2.k31/purged_default.fa
# AFTER PURGING ALT q2, default:
/work/dominguez/YelCar_GenomeAssembly_MPI/asm/reads_trimmed_DC/hifiasm_l2/hifiasm_l2_purge_dups/c1.l2.k31/q2_default/purged_default.fa
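For each parameter set, Merqury can then be run on the corresponding primary/alternate pair against the read database; a hedged sketch for l2 after purging (meryl database and output prefix names are my own):
# QV, k-mer completeness and spectra plots for the purged l2 assembly pair
merqury.sh YelCar_reads.meryl \
  /work/dominguez/YelCar_GenomeAssembly_MPI/asm/reads_trimmed_DC/hifiasm_l2/hifiasm_l2_purge_dups/c1.l2.k31/purged_default.fa \
  /work/dominguez/YelCar_GenomeAssembly_MPI/asm/reads_trimmed_DC/hifiasm_l2/hifiasm_l2_purge_dups/c1.l2.k31/q2_default/purged_default.fa \
  YelCar_l2_purged_merqury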
Running GEP (only worked in default mode, not slurm)
- First tried comparing only before and after purging of l0
files:
runEval_Hifiasm_l0.tsv
config.yaml
nohup snakemake --profile SUBMIT_CONFIG/default/ --cores 144 --jobs 20 --printshellcmds > output_020523_EvalMode.log 2>&1 &
hifiasm_l0_BEFOREpurging:
hifiasm_l0_AFTERpurging:
- Now run the other asms: before and after purging of l1 and l2
runEval_Hifiasm_l1_l2.tsv
config.yaml
module load lang/Anaconda3/2020.11
source activate GEP
nohup snakemake --profile SUBMIT_CONFIG/default/ --cores 144 --jobs 20 --printshellcmds > output_020523_EvalMode_l1_l2.log 2>&1 &
REPORT
hifiasm_l1_BEFOREpurging:
hifiasm_l1_AFTERpurging:
hifiasm_l2_BEFOREpurging:
hifiasm_l2_AFTERpurging:
BEST ASM: hifiasm_l2_AFTERpurging
1a. assembly mitochondrial genome: MitoHiFi, https://github.com/VGP/vgp-assembly/tree/master/mitoVGP
1b. annotate with MITOS webserver ? https://pubmed.ncbi.nlm.nih.gov/22982435/
2. Repeats annotation: https://www.gensas.org/tools
3. Gene annotation using the taxonomically closest species (GeMoMa) or ab initio prediction (MAKER2). MAKER might be better, as GeMoMa will ignore divergent genes. Use all protein databases for vertebrates (or birds) in NCBI and find transcriptome data of close species.
4. Contamination check (megaBLAST, blobtools, KAT): DONE but not with blobtools (no permissions to install it on HPC, try in Alan).
5. Hi-C for scaffolding ($?, MPI-Dresden: €2,393.05 for the pull-downs and library prep. For Illumina sequencing, calculate the desired coverage at €10.80/Gb)
6. DeepVariant to polish: see https://www.science.org/doi/10.1126/sciadv.abm6494. After DV, use the VCF outfile to run bcftools consensus to correct base errors (see the sketch after this list).
7. Circulomics DNA isolation of the diuca sample
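A hedged sketch of step 6, assuming DeepVariant produces a VCF of candidate base errors against the purged primary assembly (all file names hypothetical):
# keep PASS, homozygous-ALT calls (likely consensus errors) and apply them to the assembly
bcftools view -f PASS -i 'GT="1/1"' deepvariant_calls.vcf.gz -Oz -o hom_alt.vcf.gz
bcftools index hom_alt.vcf.gz
bcftools consensus -f hifiasm_l2_purged.fa hom_alt.vcf.gz > YelCar_polished.fa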