# Awaiting open-source release

* https://mdnice.com/writing/68ce8d0aa0574184a4fc9ad96309f328
* Outfit Anyone
* [tryondiffusion](https://github.com/tryonlabs/tryondiffusion)
* [diffuse2choose](https://diffuse2choose.github.io/)

# [OminiControlGP](https://github.com/deepbeepmeep/OminiControlGP)

# [Wan2.1](https://github.com/Wan-Video/Wan2.1) (open-sourced 2/25)

* Install by following the GitHub instructions
* conda activate wen
* python generate.py --task t2v-14B --size 1280*720 --ckpt_dir ./Wan2.1-T2V-14B --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."

## ComfyUI

* Prompts:
  * Hug: best quality video of a handsome boy and a cute girl embrace with each other
  * Kiss: best quality video of a handsome boy and a cute girl embrace and kiss
  * Handshake: best quality video of a handsome boy and a girl shake hands with each other

## [Accelerated version: Wan2GP](https://github.com/deepbeepmeep/Wan2GP)

# MMAudio

* python demo.py --duration=8 --video=<path to video>
* python gradio_demo.py

# [PuLID](https://github.com/ToTheBeginning/PuLID)

* [ERROR: Exception in ASGI application](https://github.com/ToTheBeginning/PuLID/issues/61)
* python app_v1_1.py --base F:\GenAI\PuLID\models\Juggernaut-XL-v9

# [Seed-VC](https://github.com/Plachtaa/seed-vc/blob/main/README-ZH.md)

# [IMAGDressing](https://github.com/v3ucn/IMAGdressing_WebUi_For_Windows)

* Fix the Python paths in the launch .bat and in app.py
* Install insightface
* Install onnxruntime-gpu
* Install CUDA

---

* Code changes:
  * Add this line to IMAGdressing_WebUi_For_Windows\dressing_sd\pipelines\IMAGDressing_v1_pipeline_ipa_controlnet.py:
    * from diffusers.loaders import LoraLoaderMixin
  * Change line 183 of F:\GenAI\IMAGdressing_WebUi_For_Windows\inference_IMAGdressing_controlnetinpainting.py:
    * openpose_model = OpenPose(1) -> openpose_model = OpenPose(0)

---

* pip install tb-nightly -i https://mirrors.aliyun.com/pypi/simple
* pip install basicsr

---

* Change line 8 of F:\anaconda3\envs\IMAGDressing_window\Lib\site-packages\basicsr\data\degradations.py:
  * from torchvision.transforms.functional_tensor import rgb_to_grayscale -> from torchvision.transforms._functional_tensor import rgb_to_grayscale
* Mirror: https://mirrors.aliyun.com/pypi/simple

### This file does not exist

python inference_IMAGdressing_counterfeit-v30.py --cloth_path "F:\GenAI\IMAGdressing_WebUi_For_Windows\assets\images\garment\c1.png" --model_path "F:\GenAI\IMAGdressing_WebUi_For_Windows\assets\model.jpg"

### Dress a given face and pose in the given garment (works)

python inference_IMAGdressing_ipa_controlnetpose.py --face_path "F:\GenAI\IMAGdressing_WebUi_For_Windows\assets\model.jpg" --pose_path "F:\GenAI\IMAGdressing_WebUi_For_Windows\assets\model.jpg" --cloth_path "F:\GenAI\IMAGdressing_WebUi_For_Windows\assets\images\garment\c1.png"

### Dress a given model in the given garment (experimental)

python inference_IMAGdressing_controlnetinpainting.py --model_path "F:\GenAI\IMAGdressing_WebUi_For_Windows\assets\model.jpg" --cloth_path "F:\GenAI\IMAGdressing_WebUi_For_Windows\assets\images\garment\c1.png"

* Prompts have no effect

### Random face with a given pose wearing the given garment

python inference_IMAGdressing_controlnetpose.py --cloth_path "F:\GenAI\IMAGdressing_WebUi_For_Windows\assets\images\garment\c1.png" --pose_path "F:\GenAI\IMAGdressing_WebUi_For_Windows\assets\model.jpg"

### Random face and pose wearing the given garment (works)

python inference_IMAGdressing.py --cloth_path "F:\GenAI\IMAGdressing_WebUi_For_Windows\assets\images\garment\c1.png"

### Cartoon style

python inference_IMAGdressing_cartoon_style.py --cloth_path "F:\GenAI\IMAGdressing_WebUi_For_Windows\assets\images\garment\c1.png"

# AnyV2V

* https://github.com/TIGER-AI-Lab/AnyV2V
* Launch:
  * cd D:\III_GAI_DIY\GenAI\AnyV2V
  * conda activate anyv2v-i2vgen-xl
  * python gradio_demo.py
* Usage:
  * Video preprocessing stage: the motion (driving) video
  * Image editing stage: repaint the first frame from a prompt
  * Video editing stage: specify the motion
* Tip: edit the image within the same subject type, e.g. generate a male from a male

---

# FateZero

* https://github.com/ChenyangQiQi/FateZero
* conda activate fatezero38

---

# MimicMotion

* https://github.com/Tencent/MimicMotion
* Launch:
  * cd D:\III_GAI_DIY\GenAI\MimicMotion
  * conda activate mimicmotion
* On the Hugging Face login problem:
  * `huggingface-cli login`
  * Shift+Insert to paste the Hugging Face token
* python inference.py --inference_config configs/test.yaml

---

# StoryDiffusion
* https://github.com/HVision-NKU/StoryDiffusion

---

# T2X

* https://github.com/Alpha-VLLM/Lumina-T2X
* pip install flash_attn-2.4.1+cu121torch2.1cxx11abiFALSE-cp311-cp311-win_amd64.whl
  * https://github.com/bdashore3/flash-attention/releases
  * The CUDA, Python, and Windows versions all have to match the wheel

### Launch on the 4090

【t2i】 (unfinished)
* conda activate Lumina_T2X
* python -u demo.py --ckpt "D:\III_GAI_DIY\GenAI\Lumina-T2X\Lumina-T2I"

【t2i_mini】
* conda activate t2x
* python -u demo.py --ckpt "D:\III_GAI_DIY\GenAI\Lumina-T2X\Lumina-Next-SFT"

【t2music】 (fails)
* conda activate Lumina_T2X_music
* cd Lumina-T2X/lumina_music

### Launch on the 3080

【t2i_mini】
* conda activate Lumina_T2X_mini
* cd Lumina-T2X/lumina_T2X_mini
* python -u demo.py --ckpt "D:\Lumina-T2X\lumina_next_t2i_mini\Lumina-Next-SFT" --hf_token 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

【t2music】 (Linux)
* conda activate Lumina_T2X_music
* cd Lumina-T2X/lumina_music
* bash run_music.sh

### Installation errors

* TCPStore() RuntimeError: unmatched "}" in format string
  * https://github.com/pytorch/pytorch/issues/118378
  * Change 127.0.0.1 -> localhost
* RuntimeError: Distributed package doesn't have NCCL built in
  * Change nccl -> gloo
* Cannot access gated repo for url https://huggingface.co/google/gemma-2b/resolve/main/config.json. Access to model google/gemma-2b is restricted. You must be authenticated to access it.
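The gated-repo error above just means the download request carried no Hugging Face credential. A minimal stdlib sketch of what the token adds to a request; the `HF_TOKEN` environment variable name is my choice for illustration, not something these repos read:

```python
# Sketch: a gated-repo download needs an Authorization: Bearer <token> header.
# HF_TOKEN is an assumed env var holding the Hugging Face access token.
import os
import urllib.request

def authed_request(url: str, token: str) -> urllib.request.Request:
    """Build a request carrying the Hugging Face bearer token."""
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

token = os.environ.get("HF_TOKEN", "")
req = authed_request(
    "https://huggingface.co/google/gemma-2b/resolve/main/config.json", token
)
# urllib.request.urlopen(req) can now succeed for an account that has accepted
# the gemma-2b license; without the header the server answers 401.
```

In practice `huggingface-cli login` or a `--hf_token` launch flag does the same thing by storing or passing this token for you.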
  * Work around it by adding --hf_token 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 when launching
* lumina_next convert D:/III_GAI_DIY/GenAI/Lumina-T2X/Lumina-T2I/model_args.pth D:/III_GAI_DIY/GenAI/Lumina-T2X/Lumina-T2I

---

# VGen

Launch:
* conda activate vgen-py310
* cd D:\III_GAI_DIY\GenAI\VGen
* python gradio_app.py

Install:
* Reference: https://github.com/ali-vilab/VGen/issues/142
* Edit requirements.txt
* Triton install: https://github.com/PrashantSaikia/Triton-for-Windows
  * Download triton-2.0.0-cp310-cp310-win_amd64.whl
  * pip install triton-2.0.0-cp310-cp310-win_amd64.whl
* Change NCCL to gloo in every .yaml file
* Run under Python 3.10 and Miniconda3
* Work through the remaining errors by following the logs

---

# [MagicClothing](https://github.com/shinechen1024/magicclothing)

* conda activate magicloth
* Garment + prompt: python gradio_generate.py --model_path "D:\III_GAI_DIY\GenAI\MagicClothing\checkpoints\magic_clothing_768_vitonhd_joint.safetensors"
* Garment + prompt + ControlNet: python gradio_controlnet_openpose.py --model_path "D:\III_GAI_DIY\GenAI\MagicClothing\checkpoints\magic_clothing_768_vitonhd_joint.safetensors"
* Garment + prompt + ControlNet + IP-Adapter: python gradio_ipadapter_openpose.py --model_path "D:\III_GAI_DIY\GenAI\MagicClothing\checkpoints\magic_clothing_768_vitonhd_joint.safetensors" --enable_cloth_guidance
* Models: https://github.com/ShineChen1024/MagicClothing/issues/65

---

# [oms-Diffusion](https://github.com/ShineChen1024/MagicClothing/tree/earlyAccess?tab=readme-ov-file)

* python gradio_generate.py --model_path "D:\III_GAI_DIY\GenAI\oms-Diffusion\checkpoints\magic_clothing_768_vitonhd_joint.safetensors"
* python inference.py --cloth_path "D:\III_GAI_DIY\GenAI\oms-Diffusion\valid_cloth\t1.png" --model_path "D:\III_GAI_DIY\GenAI\oms-Diffusion\checkpoints\magic_clothing_768_vitonhd_joint.safetensors"
* python gradio_ipadapter_faceid.py --model_path "D:\III_GAI_DIY\GenAI\oms-Diffusion\checkpoints\magic_clothing_768_vitonhd_joint.safetensors"
* python gradio_animatediff.py --cloth_path "D:\III_GAI_DIY\GenAI\oms-Diffusion\valid_cloth\t1.png" --ckpt_dir "D:\III_GAI_DIY\GenAI\ComfyUI_windows_portable\ComfyUI\models\animatediff_models\AnimateLCM_sd15_t2v.ckpt"
* Error: huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'D:/III_GAI_DIY/GenAI/oms-Diffusion/checkpoints/ipadapter_faceid/ip-adapter-faceid-plus_sd15_lora.safetensors'. Use `repo_type` argument if needed.
  * Fix: pass the weight as an absolute path
* Shares its models with MagicClothing

---

# [Live_Portrait_Monitor](https://github.com/Mrkomiljon/Live_Portrait_Monitor)

* conda activate LivePortrait2
* To use a monitor: `python inference_monitor.py -s assets/examples/source/MY_photo.jpg`
* Disable pasting back: `python inference_monitor.py -s assets/examples/source/s9.jpg -d assets/examples/driving/t.mp4 --no_flag_pasteback`
* To use the original code for inference:

```
python inference_org.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4 --no_flag_pasteback
```

* One-click launcher (not working):
  * "D:\III_GAI_DIY\Live_Portrait_Monitor_High_FPS_window\Monitor.bat"

---

# [Webcam_Live_Portrait](https://github.com/Mrkomiljon/Webcam_Live_Portrait)

## Use this one

* conda activate LivePortrait2
* cd D:\III_GAI_DIY\Webcam_Live_Portrait

**Real-time monitoring with the webcam**
`python inference.py -s assets/examples/source/MY_photo.jpg`

**Screen recording**
`python screen.py -s assets/examples/source/MY_photo.jpg`

**Screen recording + webcam**
`python test.py -s assets/examples/source/MY_photo.jpg`

(3080)
* conda activate LivePortrait
* cd D:\Webcam_Live_Portrait

## Test notes

* Oversized eyes look jarring; the effect is poor
* Cartoon characters that work: Snow White, Rapunzel, Nami, Princess Mononoke, Sophie (Howl's Moving Castle)
* Characters that don't (animal versions work, but the eyes don't move): Crayon Shin-chan, Pikachu, Chibi Maruko-chan, Ash (Pokémon), Mermaid Melody, panda
* Conclusion: the source needs a nose and the basic facial features

# [LivePortrait](https://github.com/KwaiVGI/LivePortrait)

* conda activate LivePortrait2
* With Gradio:
  * `python app.py` for humans
  * `python app_animals.py` for animals
* Source image + driving video -> video: `python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4`
* Source video + driving video -> video: `python inference.py -s assets/examples/source/s13.mp4 -d assets/examples/driving/d0.mp4`
* GPU error: https://github.com/KwaiVGI/LivePortrait/issues/290
* One-click installer (launches Gradio):
  * animal: run_windows_animal.bat
  * human: run_windows_human.bat

## Launch

* conda activate lpanimal
* cd D:\III_GAI_DIY\LivePortrait-animal

---

# [COTTON-size-does-matter](https://github.com/cotton6/COTTON-size-does-matter)

## Common installs

* Triton-for-Windows
  * https://github.com/PrashantSaikia/Triton-for-Windows
  * The .whl file is in D:\III_GAI_DIY\GenAI
  * pip install D:\III_GAI_DIY\GenAI\triton-2.0.0-cp310-cp310-win_amd64.whl

python create_seal_api.py -t 小吉籤詩預喜訊,福德漸滿事漸行,進步雖緩現成效,就待春風報喜來 -p D:\III_GAI_DIY\GenAI\Lumina-T2X\lumina_next_t2i_mini\output\e5431b8f9a4d4f0dbb17ba55905371f0.jpg

---

# [FasterLivePortrait](https://github.com/warmshao/FasterLivePortrait)

* [Windows integration package](https://drive.google.com/file/d/1fRJ5AJeXLLrI2q5lgnxE42CK04FQ29hw/view)

1. Download and unzip it
2. Inside FasterLivePortrait-windows, run all_onnx2trt.bat to convert the ONNX files
3. Swap in the target image and run from the command line:

```bash
camera.bat assets/examples/source/s9.jpg
```
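Several tools above fail for the same reason: the CUDA build, Python version, and wheel platform don't match. A small sketch that probes the active conda env before launching anything; the package list is an assumption drawn from the repos in these notes, not a fixed requirement:

```python
# Quick environment probe before launching any of the tools above.
# The checked package names are assumptions based on these notes; adjust per env.
import importlib.util
import platform
import struct

def probe(packages):
    """Map each package name to whether it is importable in this env."""
    return {p: importlib.util.find_spec(p) is not None for p in packages}

if __name__ == "__main__":
    print("python :", platform.python_version())
    # the wheels above are all win_amd64, so the interpreter must be 64-bit
    print("64-bit :", struct.calcsize("P") * 8 == 64)
    for name, ok in probe(["torch", "triton", "onnxruntime", "insightface"]).items():
        print(f"{name:12s}", "installed" if ok else "MISSING")
```

Run it inside each conda env (`conda activate <env> && python probe.py`) to catch a missing wheel before a launch script fails halfway through.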