# LoRA training and applications

* Beginner tutorial: https://ivonblog.com/posts/stable-diffusion-webui-manuals/training/embedding/
* Model-training references/tutorials:
  * https://blog.csdn.net/leo0308/article/details/132511425
  * https://exp-blog.com/ai/sd-ru-men-09-lora-train/
  * https://devops.iii.org.tw/#/my-work

### **ComfyUI workflow**

https://drive.google.com/drive/folders/1t1Yz3ZU-qDH7nO8mUrBFCDIEhAbKAUZV?usp=sharing

* SDt8 plugin: https://zhuanlan.zhihu.com/p/579538165

###### Launching the tools

* ComfyUI_old
  * `conda activate comfyui_old`
  * `cd /home/ubuntu/ComfyUI/`
  * `python main.py --7781`
* ComfyUI
  * `cd /home/ubuntu/ComfyUI/`
  * `python main.py`
* ComfyUI-3D
  1. Open the folder (ComfyUI_windows_portable) and type `cmd` in the folder's address bar
  2. `conda activate D:\ComfyUI\ComfyUI_windows_portable\python_miniconda_env\ComfyUI`
  3. `.\run_nvidia_gpu_miniconda.bat`
* SD-trainer
  * `conda activate lora`
  * `cd lora-scripts/`
  * `./run_gui.sh`
* Kohya
* Magic Animate
  * `conda activate manimate`
  * `cd magic-animate`
  * `./run.sh`
  * ![螢幕擷取畫面 2024-03-06 114924](https://hackmd.io/_uploads/BJiXsPST6.png)
* Magic Animate (Windows)
  * `cd D:\magic-animate-for-windows-2.0`
  * `venv\Scripts\activate`
  * Generate skeleton: `.\run_VidControlnetAux_gui.ps1`
  * Generate video: `.\run_gui.ps1`
* VideoReTalking
  * `cd /home/ubuntu/ComfyUI/video-retalking/`
  * `conda activate video_retalking`
  * `python webUI.py`
  * Parameters: `python3 inference.py --face [temp/video/] --audio [temp/audio/] --output [result]`
* DiffSynth-Studio
  * `cd /home/ubuntu/DiffSynth-Studio/`
  * `conda activate DiffSynthStudio`
  * `streamlit run DiffSynth_Studio.py`
* Moore Animate-Anyone
  * `cd /home/ubuntu/Moore-AnimateAnyone-master/`
  * `source .venv/bin/activate`
  * `python app.py`
* DensePose conversion
  * `cd /home/ubuntu/Vid2DensePose/`
  * `python Vid2DensePose.py`
* roop
  * `roop/roop/`
  * `activate py39`
  * `python run.py --frame-processor=face_swapper --output-video-encoder libx265 --output-video-quality 0 --keep-fps --execution-provider cuda`

Prompt examples:

* Positive: `No wings, face lens, white background, reality, 40-year-old girl, crown, brown eyes, traditional clothing, gold embroidery, whole body, (standing:1.3), No base required, golden clothes, red clothes, bun hair, Gorgeous headdress, <lora:mingStyle8:0.5>, <lora:Doorgods:0.3>`
* Negative: `nsfw, ugly, bad_anatomy, bad_hands, extra_hands, missing_fingers, broken hand, more than two hands, well proportioned hands, more than two legs, unclear eyes, missing_arms, mutilated, extra limbs, extra legs, cloned face, fused fingers, extra_digit, fewer_digits, extra_digits, jpeg_artifacts, signature, watermark, username, blurry, large_breasts, worst_quality, low_quality, normal_quality, mirror image, Vague, (wing:1.3)`
* Positive: `1girl, whole body, face, (blue blazer, crew neck black t-shirt:1.3), Japanese pure short hair, black shoes, brunette, formal, full body, glasses, realistic, shoes, short hair, side bangs, show forehead, solo, standing, rectangular glasses, looking at viewer, necklace, realistic, short hair, white background, smile, watch, suit pants, no necklace`

| Web app | Mode | Port |
|:-------------------- | ---- | ---- |
| Magic Animate (Windows) | video | 7887 |
| Magic Animate (Windows) | skeleton | 7861 |
| ComfyUI | | 8188 |
| SD-trainer | | |
| Magic Animate | | 7888 |
| VideoReTalking | | |
| DiffSynth-Studio | | |
| Moore | | 7860 |

# Results

### Backpack on back + pose change

* SD WebUI → inpaint

| Person | Product | Generated |
| -------- | -------- | -------- |
| ![S__70574091_1](https://hackmd.io/_uploads/By99AUV2a.jpg) | ![porter_new_heat__1610416686_32218110_progressive-removebg-preview](https://hackmd.io/_uploads/SJTn0U4hT.png) | ![2159848483-1](https://hackmd.io/_uploads/Bkbw2mQnp.png) |

* ComfyUI → OpenPose + inpaint

| After inpainting | Skeleton | Result |
| -------- | -------- | -------- |
| ![2159848483-1](https://hackmd.io/_uploads/Bkbw2mQnp.png) | ![pose (1)](https://hackmd.io/_uploads/BJ1r27mha.png =75%x) | ![ComfyUI_00127_](https://hackmd.io/_uploads/rJfznmXhp.png) |

### Clothing swap + video generation

{%youtube 0Y28GwpRlyc %}

### Face swap

https://youtube.com/shorts/4VRNYw7PTi8
{%youtube -4VRNYw7PTi8 %}
https://youtube.com/shorts/-25gJZD6LCQ?feature=share
{%youtube 7XYez_WTIYU %}

## Data preparation

1. SD WebUI
   * Open SD WebUI → Extras → Batch from Directory
   * Scale to: set the crop size
   * Caption: check Deepbooru to auto-generate caption files (Deepbooru produces comma-separated tags, BLIP a single sentence)
2.
SD tagger ★ ★ ★
   * Video reference: https://www.youtube.com/watch?v=xc_MmF4JjMU
3. SD-trainer WD1.4 tagger
   * The additional prompt is prepended to the start of every caption file
   * **BooruDatasetTagManager**: caption/tag editor

## Embedding

* Train → Create embedding → enter a name → set Number of vectors per token to 7 or more → click Create embedding
* Switch to the Train tab and select the embedding you just created; in Dataset directory enter the path to the training data (the image folder) → set Prompt template file to style_filewords.txt → set Max steps so training stops at 10000; you can also set it higher, watch the preview images, and press Interrupt once the quality looks good enough → click Train Embedding to start training.

## HyperNetwork

Much the same as Embedding.

## LoRA

* Ideal step count: ~1500 (varies per case)

###### Workflow

* Prompts
  * Tools
    * Prompt generator: http://www.atoolbox.net/Tool.php?Id=1101
    * Prompt table: https://www.youtube.com/watch?v=3InV9zoX2rk
  * Negative prompt: `nsfw, ugly, bad_anatomy, bad_hands, extra_hands, missing_fingers, broken hand, more than two hands, well proportioned hands, more than two legs, unclear eyes, missing_arms, mutilated, extra limbs, extra legs, cloned face, fused fingers, extra_digit, fewer_digits, extra_digits, jpeg_artifacts, signature, watermark, username, blurry, large_breasts, worst_quality, low_quality, normal_quality, mirror image, Vague`
  * https://vocus.cc/article/647abaa6fd89780001e9bd59
  * `masterpiece, best quality`

| Without | With |
| -------- | -------- |
| ![image](https://hackmd.io/_uploads/By_N97mhp.png) | ![image](https://hackmd.io/_uploads/Hy8X9QX3p.png) |
| ![00020-2316657800](https://hackmd.io/_uploads/SyuLbX43a.png) | ![00021-2316657800](https://hackmd.io/_uploads/ByuvbQ436.png) |

* Keyword order: the earlier a keyword appears, the more weight it carries
* Custom weight: `(key:1.3)`, no more than 1.5; above 1 increases the weight, below 1 decreases it
* Blending: `[A|B:steps/ratio]`; if unspecified, the system assigns it automatically
* Early training results

| `<lora:only_porterBag_207:1>, bag, fishnets` | `<lora:only_porterBag_207:1>, A man carrying a backpack, bag, fishnets` | `<lora:only_porterBag_207:1>, A man carrying a backpack, pink bag, fishnets` | `masterpiece, best quality, <lora:only_porterBag_207:1>, A man carrying a backpack, pink bag, fishnets` |
| -------- | -------- | -------- | -------- |
| ![image](https://hackmd.io/_uploads/HJymPves6.png) | ![00017-3938798099](https://hackmd.io/_uploads/SyYsDPljp.png) | ![image](https://hackmd.io/_uploads/SJZrdPeip.png) | ![image](https://hackmd.io/_uploads/HJsF0Deia.png) |

* Improving accuracy: https://zhuanlan.zhihu.com/p/616837063
  1. Remove the background: image training is not that smart, so if the LoRA is a **character model**, the training images should have no background at all (or a plain white one); cut the subject out. If there are conspicuous features you don't want, simply Photoshopping them out at this step is the quickest fix.
  2. Shuffle the order: **the position and order of images in the training set affect the result**. If the set contains several kinds of outfits and you don't want any one of them to be too prominent in the model, use a batch-rename tool to **randomize the image order** before preprocessing; at minimum, no four consecutive images should share an unwanted feature. The preprocessed set then comes out shuffled, and training learns only the body and face without easily memorizing a particular outfit. (Conversely, if you want a feature to stand out, group the images sharing that feature together.)
  3. Mirror flip: if images from the trained LoRA always have **asymmetric left/right faces**, or always face one side, add horizontally flipped copies when building the training set; the resulting model will then have no left/right bias.
  4. Repeat count: the per-image repeat count is usually recommended to be **6** to avoid overfitting. But if mirror flipping means each image is effectively trained twice, I feel the repeat count can be lowered to 3.
  5. Learning rate: following 青龍聖者's guide, first run the model with the DAdaptation optimizer (the first few minutes are enough to obtain a learning rate) while monitoring the training log with TensorBoard; once the learning rate stabilizes, record it, divide it by 3, and switch to the Lion optimizer.
* Increasing resolution: https://koding.work/generate-high-resolution-images-with-stable-diffusion/
* Controlling character pose: https://vocus.cc/article/640ee135fd8978000155ef23
* Posable mannequin for pose control: https://vocus.cc/article/64a50f75fd8978000165c9df
  * Website (posemy): https://app.posemy.art/

###### WebUI

* img2img:
  1. Inpaint sketch: local redraw (generation). Generates the prompted image, then copies it back into the painted area; best suited to generating new objects.
     ![2159848483-1](https://hackmd.io/_uploads/HJ56Yuhsa.png)
  2. Inpaint: local edits
  3. Sketch: draw your own rough draft
  4.
img2img: full-image redraw
   * With Denoising strength set low, this can be used to fine-tune the original image.
   * file://wsl.localhost/Ubuntu/home/ubuntu/ComfyUI/output/ComfyUI_00128_.webp
* txt2img
  ![938415583](https://hackmd.io/_uploads/SyVRtu2sp.png)
  Shape and position roughly match, but detail is missing.
* Overfitting causes loss of detail, blurry or grayish images, ragged edges, inability to perform specified poses, and poor results on some base models → poor generalization.
* Clothing swap: https://www.youtube.com/watch?v=uf2UeMrPKn8

## Image-to-video

* img2video
  * Article: https://stable-diffusion-art.com/stable-video-diffusion-img2vid/
  * Colab: https://colab.research.google.com/github/sagiodev/stable-diffusion-img2vid/blob/main/stable_video_diffusion_img2vid.ipynb
  * Discord: https://www.youtube.com/watch?v=FuAElw8acv8
  * DensePose conversion: https://colab.research.google.com/drive/1x77dESn7EGPCqjKdQ1sJJhNu0Cf-5Gpt?usp=sharing#scrollTo=k1ak_mbRJlpV
* MagicAnimate
  * Online: https://huggingface.co/spaces/zcxu-eric/magicanimate
  * Original image to video {%youtube NsjAtU_N-jA %}
  * After clothing swap (full body): the original image has no lower half, so the AI improvises; the clothing model is a long dress {%youtube rvUL1Uco9Hs %}
  * After clothing swap (half body) {%youtube 0Y28GwpRlyc %}
    Pants-to-skirt swap → bug!

| Mask on the clothing area (auto-detected) | Inpaint a large area (manual) |
| -------- | -------- |
| ![00035-2556022378](https://hackmd.io/_uploads/H1d37lya6.png) | ![00034-108713149](https://hackmd.io/_uploads/H1MoXlJTT.png) |

* **Difficulty: when the clothing region grows from small to large, it cannot be auto-detected**
  * Solution: turn the mask into a keep-region and redraw everything outside it (testing)
* Precise clothing swap: https://zhuanlan.zhihu.com/p/656933565

### DreamBooth

* Training
  * https://www.youtube.com/watch?v=MiDdEaS6lqI
  * https://www.youtube.com/watch?v=LGXRHe0VnLY
  * https://github.com/Akegarasu/dreambooth-autodl (code)

### Face restoration

* https://www.patreon.com/posts/v3-0-lipsync-fix-99387166
* https://h9856.gameqb.net/prompt-pose-hands/
* https://www.aifure.com/stable-diffusion%E6%89%8B%E9%83%A8%E7%BB%86%E8%8A%82%E4%B8%8D%E5%AE%8C%E7%BE%8E%E6%80%8E%E4%B9%88%E5%8A%9E%EF%BC%8C-%E5%AE%8C%E7%BE%8E%E8%A7%A3%E5%86%B3ai%E4%B8%8D%E4%BC%9A%E7%94%BB%E6%89%8B%E9%97%AE/

### Video + lip sync

* https://github.com/OpenTalker/video-retalking?tab=readme-ov-file
* https://replicate.com/cjwbw/video-retalking
* https://www.youtube.com/watch?v=HnU_59nNx8w
* https://lalamu.studio/demo/demo.html#page1

### AnimateDiff

* https://blog.hinablue.me/comfyui-animatediff-controlnet-keyframe-prompt-travel/
* Magic Animate
  * Image to video
  * Generates at 1:1
* VideoReTalking
  * Face detection
  * Parameters: `python3 inference.py --face [temp/video/] --audio [temp/audio/] --output [result]`
* ComfyUI video generation
  * No more than 24 images
* Moore-AnimateAnyone
  * https://github.com/MooreThreads/Moore-AnimateAnyone
  * Keep the image and video the same size, or at least the same aspect ratio

###### TensorBoard

Monitors the learning curves during LoRA training.

* https://www.youtube.com/watch?v=PaYsHlqiVAE
* https://www.youtube.com/watch?v=mYwvATBQXSM&t=155s
* https://github.com/s0md3v/roop

### Clothing training

* https://www.bilibili.com/video/BV1VT411t7Na/?vd_source=d060fe4dd5a6683127f1df9c93c14b1f
* At least 5 images
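The `(key:1.3)` weight syntax used in the prompts above can be illustrated with a tiny parser. This is a simplified sketch of my own (the function name `parse_weights` is made up; WebUI's real parser also handles nesting and `[A|B]` blending, which this skips):

```python
import re

def parse_weights(prompt: str):
    """Split a WebUI-style prompt into (token, weight) pairs.

    Tokens written as (token:1.3) get that explicit weight;
    plain tokens default to 1.0. Simplified sketch only: no
    nested parentheses and no [A|B:ratio] blending.
    """
    pairs = []
    for part in prompt.split(","):
        part = part.strip()
        if not part:
            continue
        match = re.fullmatch(r"\((.+):([0-9.]+)\)", part)
        if match:
            pairs.append((match.group(1).strip(), float(match.group(2))))
        else:
            pairs.append((part, 1.0))
    return pairs

print(parse_weights("1girl, (standing:1.3), white background"))
# → [('1girl', 1.0), ('standing', 1.3), ('white background', 1.0)]
```

A weight above 1 amplifies a token's influence and below 1 weakens it, which is why the notes cap it at 1.5.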
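Accuracy tip 2 above (randomizing training-image order by batch renaming) can be sketched as a small script. The function name `shuffle_rename` and the zero-padded prefix scheme are my own assumptions, not the behavior of any specific renaming tool:

```python
import random
from pathlib import Path

def shuffle_rename(folder, seed=None):
    """Give every file in `folder` a random zero-padded index prefix,
    so tools that process images in filename order see them shuffled."""
    files = sorted(p for p in Path(folder).iterdir() if p.is_file())
    indices = list(range(len(files)))
    random.Random(seed).shuffle(indices)
    # First pass: move to temporary names so new names cannot collide
    # with names that have not been processed yet.
    temps = [p.rename(p.with_name(f"tmp_{i}_{p.name}"))
             for i, p in enumerate(files)]
    # Second pass: strip the temp marker and apply the shuffled index.
    for idx, p in zip(indices, temps):
        original = p.name.split("_", 2)[2]
        p.rename(p.with_name(f"{idx:04d}_{original}"))
```

Run it on the training-set folder before preprocessing; with a fixed `seed` the shuffle is reproducible.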
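Accuracy tip 3 (mirror flipping) is just a horizontal flip of each training image. In practice you would call Pillow's `ImageOps.mirror` on real files; to keep this sketch dependency-free, here is the same operation on a flat row-major pixel list:

```python
def hflip(pixels, width):
    """Horizontally mirror a flat, row-major pixel buffer.

    Dependency-free illustration only; with real image files
    use PIL.ImageOps.mirror(image) instead.
    """
    rows = [pixels[i:i + width] for i in range(0, len(pixels), width)]
    return [px for row in rows for px in reversed(row)]

# A 3x2 "image": each row is reversed independently.
print(hflip([1, 2, 3, 4, 5, 6], width=3))
# → [3, 2, 1, 6, 5, 4]
```

Adding these flipped copies to the training set removes the left/right bias described above, at the cost of doubling the effective pass count (hence the halved repeat count in tip 4).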