###### tags: `AI`
# Stable Diffusion with ControlNet
The packages are very large (> 30 GB!) and should be downloaded in advance! Even if something goes wrong with the installation, the models and the checkpoint + VAE should already be downloaded.
## Dependencies
+ 40 GB of free disk space
+ Install [GIMP](https://www.gimp.org/downloads/)
+ Install [Git](https://git-scm.com/downloads)
+ Install [Python 3.10](https://www.python.org/downloads/release/python-31011/) (select "Add Python to PATH"; CAUTION IF A PYTHON VERSION IS ALREADY INSTALLED!)
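
A quick sanity check in PowerShell that Git and Python are installed and on the PATH:

```powershell
# Verify that the installers put both tools on the PATH (open a new PowerShell window first)
git --version        # should print a git version string
python --version     # should report Python 3.10.x
```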
## Stable Diffusion
* [Stable Diffusion](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
+ Press the Windows key and launch "PowerShell"
+ `mkdir git`
+ `cd git`
+ `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`
+ `cd stable-diffusion-webui`
+ `.\webui-user.bat`
+ Wait until everything is installed, then open `http://127.0.0.1:7860/` in your browser
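
For reference, the whole sequence as one PowerShell session (a sketch; it assumes you start in your user profile folder):

```powershell
# Install sequence from the list above
mkdir git
cd git
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
.\webui-user.bat
# Once the first-run setup finishes, open http://127.0.0.1:7860/ in the browser
```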
## ControlNet
* [ControlNet](https://github.com/Mikubill/sd-webui-controlnet)
+ In the GUI: Extensions > Install from URL: `https://github.com/Mikubill/sd-webui-controlnet.git` > Install
+ In the GUI: Extensions > Installed > Apply and restart UI
* [ControlNet Models](https://huggingface.co/lllyasviel/ControlNet-v1-1/tree/main)
+ Download the \*.pth files into `stable-diffusion-webui\extensions\sd-webui-controlnet\models` (press the ↓ button next to each file; a scripted download is sketched after this list)
* Settings > ControlNet > Multi ControlNet: Max models amount: 2
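
If you prefer to script the model download, here is a minimal sketch; the file name `control_v11p_sd15_canny.pth` and the `resolve/main` URL pattern are assumptions, so check the Hugging Face file list for the exact names you need:

```powershell
# Hypothetical example: fetch one ControlNet 1.1 model straight into the extension's model folder
$modelDir = ".\stable-diffusion-webui\extensions\sd-webui-controlnet\models"
Invoke-WebRequest `
    -Uri "https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny.pth" `
    -OutFile "$modelDir\control_v11p_sd15_canny.pth"
```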
## Checkpoint
* [Checkpoints AyoniMix](https://civitai.com/models/4550/ayonimix)
+ Save the download in `stable-diffusion-webui\models\Stable-diffusion`
* [VAE](https://stable-diffusion-art.com/how-to-use-vae/)
+ Save the [VAE download](https://civitai.com/api/download/models/21877?type=VAE) in `stable-diffusion-webui\models\VAE` (a scripted download is sketched below)
+ In the GUI: Settings > Stable Diffusion > SD VAE = `vae-ft-mse-840000-ema-pruned.vae.ckpt`
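
The VAE download can be scripted the same way; a sketch, assuming the linked Civitai download resolves to the `vae-ft-mse-840000-ema-pruned` file (rename the output if it does not):

```powershell
# Hypothetical example: download the VAE linked above into the webui's VAE folder
Invoke-WebRequest `
    -Uri "https://civitai.com/api/download/models/21877?type=VAE" `
    -OutFile ".\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.vae.ckpt"
```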
## Tweaks
* If you have an NVIDIA graphics card: add `--xformers` to `webui-user.bat` (`set COMMANDLINE_ARGS=--xformers`); a scripted edit is sketched below
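
A sketch of how to patch the file from PowerShell, assuming it still contains the default `set COMMANDLINE_ARGS=` line (you can of course just edit it in a text editor):

```powershell
# Back up webui-user.bat, then set the --xformers flag
Copy-Item .\webui-user.bat .\webui-user.bat.bak
(Get-Content .\webui-user.bat) -replace '^set COMMANDLINE_ARGS=.*', 'set COMMANDLINE_ARGS=--xformers' |
    Set-Content .\webui-user.bat
```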
## Settings
### txt2img
#### Positive prompt
colorful, saturated, ultra detailed face and eyes, nostalgia, cinematic, moody, dramatic lighting, photo, majestic, oil painting, high detail, soft focus, golden hour, bokeh, centered, rimlight
#### Negative prompt
cartoon, 3d, zombie, disfigured, deformed, extra limbs, b&w, black and white, duplicate, morbid, mutilated, cropped, out of frame, extra fingers, mutated hands, mutation, extra limbs, clone, out of frame, too many fingers, long neck, tripod, photoshop, video game, tiling, cut off head, patterns, borders, frame, symmetry, intricate, signature, text, watermark
#### Sampling method
DDIM
#### Sampling steps
More steps improve the result (e.g. 30-50)
#### Width & height
Same aspect ratio as the input images, e.g. 640x360 for a later 1280x720 upscale
#### Batch count
How many batches of images to generate (e.g. 4)
#### Batch size
How many images to generate per batch (e.g. 1)
#### CFG Scale
How closely the prompt is followed: lower = more creative, higher = stricter (e.g. 5-12)
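
The settings above can also be sent programmatically. Here is a minimal sketch against the webui's `/sdapi/v1/txt2img` endpoint, assuming the webui was started with `--api` in `COMMANDLINE_ARGS`; shorten or replace the prompts as needed:

```powershell
# Hypothetical example: the txt2img settings above as an API request
$payload = @{
    prompt          = "colorful, saturated, ultra detailed face and eyes, ..."  # full positive prompt from above
    negative_prompt = "cartoon, 3d, zombie, disfigured, ..."                    # full negative prompt from above
    sampler_name    = "DDIM"
    steps           = 40        # sampling steps, e.g. 30-50
    width           = 640
    height          = 360
    n_iter          = 4         # batch count
    batch_size      = 1
    cfg_scale       = 7         # e.g. 5-12
} | ConvertTo-Json

$resp = Invoke-RestMethod -Uri "http://127.0.0.1:7860/sdapi/v1/txt2img" `
    -Method Post -ContentType "application/json" -Body $payload

# The response contains base64-encoded images; save the first one
[IO.File]::WriteAllBytes("$PWD\txt2img-result.png", [Convert]::FromBase64String($resp.images[0]))
```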
### ControlNet
#### Unit 0
* Enable
* Low VRAM (if < 16 GB VRAM)
* Preprocessor: Canny, Model: \*canny
* Set the preprocessor resolution to match the aspect ratio of the input image
#### Unit 1
* Enable
* Low VRAM (if < 16 GB VRAM)
* Preprocessor: Openpose, Model: \*openpose
* Set the preprocessor resolution to match the aspect ratio of the input image
### img2img
* Default prompts and settings are the same as for txt2img
* Inpaint areas with new prompts
* Inpaint area: Only masked
* Enable "Restore faces" when inpainting faces
* Do not forget to match the image aspect ratio
* Do not forget to enable ControlNet
### Extras
* Resize: 2 - 4
* Upscaler 1: R-ESRGAN 4x+
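
The upscaling step can be scripted too; a sketch against the `/sdapi/v1/extra-single-image` endpoint (field names are assumptions based on the webui API docs at `http://127.0.0.1:7860/docs`, and `--api` is required):

```powershell
# Hypothetical example: upscale a finished image 2x with R-ESRGAN 4x+
$imageB64 = [Convert]::ToBase64String([IO.File]::ReadAllBytes("$PWD\txt2img-result.png"))

$payload = @{
    image            = $imageB64
    upscaling_resize = 2              # resize factor, 2-4
    upscaler_1       = "R-ESRGAN 4x+"
} | ConvertTo-Json

$resp = Invoke-RestMethod -Uri "http://127.0.0.1:7860/sdapi/v1/extra-single-image" `
    -Method Post -ContentType "application/json" -Body $payload

[IO.File]::WriteAllBytes("$PWD\upscaled.png", [Convert]::FromBase64String($resp.image))
```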
## Learning Resources
[YouTube tutorial](https://www.youtube.com/watch?v=dLM2Gz7GR44&t=950s)