# Hardware Resources Required for Model Validation

## :memo: Where do I start?

- Contact window — email us: 2303117@narlabs.org.tw (Ms. Wang)

### Preparation

- Download the container image

```
singularity pull docker://c00cjz00/c00cjz00_cuda11.8_pytorch:2.1.2-cuda11.8-cudnn8-devel-llama_factory
```

- Packages required for a native (non-container) installation

```
# Install the required Ubuntu packages
apt install libfontconfig libaio-dev libibverbs-dev jq

# Install the LLaMA-Factory related Python packages
pip install llmtuner==0.5.3 deepspeed==0.13.1 bitsandbytes==0.42.0 opencc opencc-python-reimplemented
```

## Estimating Hardware Resource Requirements

### DeepSpeed Stage 2, 7B

```
from transformers import AutoModel
from deepspeed.runtime.zero.stage_1_and_2 import estimate_zero2_model_states_mem_needs_all_live

model = AutoModel.from_pretrained("/work/u00cjz00/slurm_jobs/github/models/Llama-2-7b-chat-hf")
estimate_zero2_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=1)
estimate_zero2_model_states_mem_needs_all_live(model, num_gpus_per_node=2, num_nodes=1)
estimate_zero2_model_states_mem_needs_all_live(model, num_gpus_per_node=4, num_nodes=1)
estimate_zero2_model_states_mem_needs_all_live(model, num_gpus_per_node=8, num_nodes=1)
estimate_zero2_model_states_mem_needs_all_live(model, num_gpus_per_node=8, num_nodes=2)
```

![image](https://hackmd.io/_uploads/rJzSaLnaa.png)

### DeepSpeed Stage 3, 7B

```
from transformers import AutoModel
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live

model = AutoModel.from_pretrained("/work/u00cjz00/slurm_jobs/github/models/Llama-2-7b-chat-hf")
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=1)
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=2)
#estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=4)
#estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=8)
#estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=4, num_nodes=2)
#estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=8, num_nodes=2)
```
![image](https://hackmd.io/_uploads/Hy_r3InaT.png)
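For a quick sanity check without downloading the checkpoint, the per-GPU figures can also be approximated from the parameter count alone. The sketch below uses my own helper names (they are not part of DeepSpeed) and assumes fp16 mixed-precision Adam: 2 bytes of weights, 2 bytes of gradients, and 12 bytes of optimizer states per parameter, the same model-states accounting the DeepSpeed estimators above report. Activations, CUDA context, and fragmentation are not included, so treat these as lower bounds.

```python
# Back-of-envelope per-GPU memory for model states under ZeRO Stage 2/3,
# assuming fp16 mixed-precision Adam (2 B weights + 2 B grads + 12 B
# optimizer states per parameter). Activation memory is NOT included.
GB = 1024 ** 3

def zero2_model_states_gb(num_params: float, num_gpus: int) -> float:
    # ZeRO-2: every GPU keeps a full fp16 copy of the weights;
    # gradients and optimizer states are partitioned across GPUs.
    return (2 * num_params + (2 + 12) * num_params / num_gpus) / GB

def zero3_model_states_gb(num_params: float, num_gpus: int) -> float:
    # ZeRO-3: weights, gradients, and optimizer states are all partitioned.
    return (2 + 2 + 12) * num_params / num_gpus / GB

if __name__ == "__main__":
    params_7b = 7_000_000_000  # rough parameter count for Llama-2-7b
    for gpus in (1, 2, 4, 8, 16):
        print(f"{gpus:>2} GPU(s): "
              f"ZeRO-2 ~{zero2_model_states_gb(params_7b, gpus):6.1f} GB/GPU, "
              f"ZeRO-3 ~{zero3_model_states_gb(params_7b, gpus):6.1f} GB/GPU")
```

Note that on a single GPU both stages degenerate to the full 16 bytes per parameter (~104 GB for 7B), which is why the commented-out multi-node calls above matter: only adding GPUs shrinks the partitioned shares.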