# NYCU-IAIA-DL2024 Performance Comparison

## Comparison

| Model                  | Hugging Face Model ID                     | Size  | Validation | Public  | Private | Total (0.3 × Public + 0.7 × Private) |
| ---------------------- | ----------------------------------------- | ----- | ---------- | ------- | ------- | ------------------------------------ |
| Chinese-LLaMA-Alpaca-3 | N/A                                        | N/A   | N/A        | 0.92    | 0.92    | 0.92                                 |
| Llama3-Chinese-Chat    | N/A                                        | N/A   | N/A        | 0.90666 | 0.90714 | 0.906996                             |
| TAIDE_Llama3           | taide/Llama3-TAIDE-LX-8B-Chat-Alpha1       | 8.03B | 0.9535     | 0.86333 | 0.90285 | 0.890994                             |
| Breeze                 | MediaTek-Research/Breeze-7B-Instruct-v1_0  | 7.49B | 0.94       | 0.89    | 0.89142 | 0.890994                             |
| Llama3                 | meta-llama/Meta-Llama-3-8B-Instruct        | 8.03B | 0.939      | 0.8433  | 0.89285 | 0.877985                             |
| Gemma                  | google/gemma-1.1-7b-it                     | 8.54B | 0.93       | 0.82    | 0.85857 | 0.846999                             |
| Taiwan LLM             | yentinglin/Taiwan-LLM-7B-v2.1-chat         | 6.74B | 0.928      | 0.84666 | 0.84571 | 0.845995                             |
| TAIDE                  | taide/TAIDE-LX-7B-Chat                     | 6.94B | 0.92       | 0.79    | 0.79714 | 0.794998                             |
| Mistral                | mistralai/Mistral-7B-Instruct-v0.2         | 7.24B | 0.8915     | 0.73333 | 0.75857 | 0.750998                             |

The Total column is the weighted sum 0.3 × Public + 0.7 × Private (a quick check appears in the Example Scripts section at the end of this document).

## Training Configuration

### 312581029 廖永誠

#### System Configuration

- **OS:** Ubuntu 22.04
- **DRAM:** 128 GB
- **GPU:** A5000 (24 GB) × 2

#### Script and Framework

- **SFTTrainer** from the trl library
- **DataCollatorForCompletionOnlyLM** from the trl library
- **Memory efficiency:**
  - fp16 (SFTTrainer class)
  - LoRA (peft library)
  - DeepSpeed ZeRO-1 (DeepSpeed & Hugging Face integration)

A sketch of this setup appears in the Example Scripts section at the end of this document.

#### Other Configuration

| Parameter                   | Value |
| --------------------------- | ----- |
| LORA_ALPHA                  | 128   |
| LORA_DROPOUT                | 0.1   |
| LORA_RANK                   | 64    |
| MAX_SEQ_LENGTH              | 4096  |
| TRAIN_BATCH_SIZE            | 1     |
| GRADIENT_ACCUMULATION_STEPS | 16    |
| WARMUP_STEPS                | 700   |
| LEARNING_RATE               | 1e-4  |
| WEIGHT_DECAY                | 0.01  |
| NUM_TRAIN_EPOCHS            | 3     |

#### Training Time

- **2.5 hours**

### 411652004 張哲嘉

#### System Configuration

- **OS:** Ubuntu 22.04
- **GPU:** RTX 4090 (24 GB)

#### Script and Framework

- Notebook: https://nbviewer.org/gist/411652004/a4ac401de8d7f87411741e68b1ea6dd0
- **SFTTrainer** from the trl library
- **DataCollatorForCompletionOnlyLM** from the trl library
- **Memory efficiency:**
  - bf16 (SFTTrainer class)
  - QLoRA (peft library)

A sketch of this setup appears in the Example Scripts section at the end of this document.

#### Other Configuration

| Parameter                   | Value  |
| --------------------------- | ------ |
| LORA_ALPHA                  | 16     |
| LORA_DROPOUT                | 0      |
| LORA_RANK                   | 16     |
| MAX_SEQ_LENGTH              | 2048   |
| TRAIN_BATCH_SIZE            | 2      |
| GRADIENT_ACCUMULATION_STEPS | 1      |
| WARMUP_STEPS                | 0      |
| LEARNING_RATE               | 1e-5   |
| WEIGHT_DECAY                | 0.01   |
| NUM_TRAIN_EPOCHS            | 1      |
| LR_SCHEDULER_TYPE           | cosine |

#### Training Time

- **1:34:43** (h:mm:ss)

### a122133 王斾頤

#### System Configuration

Trained in a **Colab Pro** environment.

- **OS:** Ubuntu 22.04
- **GPU:** V100-SXM2-16G
- **Max memory:** 15.773 GB

#### Script and Framework

- **SFTTrainer** from the trl library
- **FastLanguageModel** from the unsloth library
- **Memory efficiency:**
  - fp16 (SFTTrainer class)
  - LoRA (unsloth library)

A sketch of this setup appears in the Example Scripts section at the end of this document.

#### Other Configuration

| Parameter                   | Value |
| --------------------------- | ----- |
| LORA_ALPHA                  | 256   |
| LORA_DROPOUT                | 0.1   |
| LORA_RANK                   | 128   |
| MAX_SEQ_LENGTH              | 4096  |
| WARMUP_STEPS                | 200   |
| NUM_TRAIN_EPOCHS            | 1     |
| LEARNING_RATE               | 2e-5  |
| GRADIENT_ACCUMULATION_STEPS | 4     |
| WEIGHT_DECAY                | 0.01  |

#### Training Time

- **2:47:53** (h:mm:ss)
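
## Example Scripts (Sketches)

The snippets below are minimal, hedged sketches that illustrate the setups described above; they are **not** the authors' actual scripts, and any model name, dataset path, column name, or prompt template they use is a placeholder assumption. First, a quick check of the Total weighting used in the comparison table, using the TAIDE_Llama3 row:

```python
# Weighted total = 0.3 * Public + 0.7 * Private (values from the TAIDE_Llama3 row).
public, private = 0.86333, 0.90285
total = 0.3 * public + 0.7 * private
print(round(total, 6))  # 0.890994
```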
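
### 312581029 廖永誠 — SFTTrainer + LoRA + DeepSpeed ZeRO-1

A minimal sketch assuming a trl 0.7/0.8-style `SFTTrainer` API. Only the hyperparameters come from the table above; the base model, dataset file, text column, response template, and DeepSpeed config path are assumptions. On the 2 × A5000 machine this would be started with a multi-GPU launcher such as `deepspeed` or `accelerate launch`.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer, DataCollatorForCompletionOnlyLM

MODEL_NAME = "taide/Llama3-TAIDE-LX-8B-Chat-Alpha1"  # assumed base model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# LoRA adapters with the ranks from the table; target modules are an assumption.
peft_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

args = TrainingArguments(
    output_dir="out-lora-zero1",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    warmup_steps=700,
    learning_rate=1e-4,
    weight_decay=0.01,
    num_train_epochs=3,
    fp16=True,
    deepspeed="ds_zero1.json",  # assumed path to a ZeRO stage-1 config
)

# Mask the prompt tokens so the loss is computed on the completion only.
collator = DataCollatorForCompletionOnlyLM(
    response_template="### Answer:",  # assumed marker that starts the completion
    tokenizer=tokenizer,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=load_dataset("json", data_files="train.json", split="train"),  # assumed
    dataset_text_field="text",  # assumed column name
    max_seq_length=4096,
    peft_config=peft_config,
    data_collator=collator,
    tokenizer=tokenizer,
)
trainer.train()
```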
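
### 411652004 張哲嘉 — SFTTrainer + QLoRA (bf16)

A minimal sketch of a QLoRA-style run: the base model is loaded in 4-bit NF4 via bitsandbytes and LoRA adapters are trained on top with bf16 compute. Hyperparameters follow the table; everything else (base model, dataset, column name, response template) is an assumption, and the actual script is in the linked notebook.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer, DataCollatorForCompletionOnlyLM

MODEL_NAME = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed base model

# 4-bit NF4 quantization (the "Q" in QLoRA), with bf16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, quantization_config=bnb_config)

# Target modules are left to peft's per-architecture defaults here.
peft_config = LoraConfig(r=16, lora_alpha=16, lora_dropout=0.0, task_type="CAUSAL_LM")

args = TrainingArguments(
    output_dir="out-qlora",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=1,
    warmup_steps=0,
    learning_rate=1e-5,
    weight_decay=0.01,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    bf16=True,
)

collator = DataCollatorForCompletionOnlyLM(
    response_template="### Answer:",  # assumed marker that starts the completion
    tokenizer=tokenizer,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=load_dataset("json", data_files="train.json", split="train"),  # assumed
    dataset_text_field="text",  # assumed column name
    max_seq_length=2048,
    peft_config=peft_config,
    data_collator=collator,
    tokenizer=tokenizer,
)
trainer.train()
```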
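
### a122133 王斾頤 — unsloth FastLanguageModel + LoRA

A minimal sketch of the unsloth-based setup. Hyperparameters follow the table; the base model, dataset, column name, batch size, and 4-bit loading (assumed here so an 8B model fits in the V100's 16 GB) are illustrative assumptions.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# unsloth wraps the model loading and returns (model, tokenizer).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="taide/Llama3-TAIDE-LX-8B-Chat-Alpha1",  # assumed base model
    max_seq_length=4096,
    load_in_4bit=True,  # assumption: 4-bit loading to fit a 16 GB V100
)

# Attach LoRA adapters with the ranks from the table; target modules are an assumption.
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    lora_alpha=256,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

args = TrainingArguments(
    output_dir="out-unsloth",
    per_device_train_batch_size=1,  # assumed; not listed in the table
    gradient_accumulation_steps=4,
    warmup_steps=200,
    learning_rate=2e-5,
    weight_decay=0.01,
    num_train_epochs=1,
    fp16=True,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=load_dataset("json", data_files="train.json", split="train"),  # assumed
    dataset_text_field="text",  # assumed column name
    max_seq_length=4096,
    tokenizer=tokenizer,
)
trainer.train()
```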