# Stable Diffusion and ControlNet for Spaces
(support for people and models with posing coming soon)
### Requirements
1. Stable Diffusion WebUI (Automatic1111)
- https://github.com/AUTOMATIC1111/stable-diffusion-webui
2. ControlNet
- https://github.com/Mikubill/sd-webui-controlnet
3. Computer with GPU
__________________________________________________________________
## Replicating Images
## Preprocessors
### Depth Preprocessor
- ControlNet allows you to stack preprocessors. In most of these examples, Depth is loaded on the ControlNet Unit 1 tab.
- The Depth preprocessor generates "a grayscale image with black representing deep areas and white representing shallow areas."
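
If you'd rather script this than click through the UI, below is a rough sketch of requesting a depth-guided generation through the WebUI API. It assumes the WebUI was launched with the `--api` flag and the ControlNet extension is installed; the image path, prompt, module name, and model name are placeholders, so check the names your install actually lists in the ControlNet dropdowns.

```python
# Minimal sketch: txt2img with one Depth ControlNet unit via the WebUI API.
# Assumes the WebUI is running locally with --api and the ControlNet extension installed.
import base64
import requests

with open("base_room.png", "rb") as f:  # hypothetical base image
    base_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "a bright modern living room, photorealistic",  # example prompt
    "steps": 20,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {
                    "input_image": base_image,
                    "module": "depth_midas",               # assumed preprocessor name
                    "model": "control_v11f1p_sd15_depth",  # assumed model name
                    "weight": 1.0,
                }
            ]
        }
    },
}

response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
response.raise_for_status()
images = response.json()["images"]  # base64-encoded result images
```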

### "Canny" Preprocessor
- Canny functions by making "a monochrome image with white edges on a black background."
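
To get a feel for what that edge map looks like before ControlNet ever sees it, you can run a plain OpenCV Canny pass on your base image. This is only a preview of the idea; the extension runs its own preprocessor internally, and the thresholds here are illustrative.

```python
# Preview of a Canny-style map: white edges on a black background.
# Assumes OpenCV is installed (pip install opencv-python); the input path is hypothetical.
import cv2

image = cv2.imread("base_room.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(image, threshold1=100, threshold2=200)  # white edges, black background
cv2.imwrite("canny_preview.png", edges)
```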

### MLSD Preprocessor
- MLSD functions much like Canny, but produces a "monochrome image composed only of white *straight lines* on a black background." Great for architectural work.
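
MLSD itself is a learned line detector, but a classical straight-line detector gives a rough preview of the same kind of output: only straight white lines on a black background. The sketch below uses OpenCV's `HoughLinesP` as a stand-in, not MLSD's actual algorithm, and the thresholds are illustrative.

```python
# Rough MLSD-style preview: keep only straight line segments, drawn white on black.
import cv2
import numpy as np

image = cv2.imread("base_room.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input path
edges = cv2.Canny(image, 100, 200)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=10)

line_map = np.zeros_like(image)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(line_map, (x1, y1), (x2, y2), color=255, thickness=2)
cv2.imwrite("mlsd_style_preview.png", line_map)
```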

### Reference Preprocessor
- Reference "can guide the diffusion directly using images as references."
- How strongly SD takes your reference into account is set with the "Control Mode" buttons and with the "Starting Control Step" and "Ending Control Step" sliders, which determine how long the diffusion uses your reference.
- A range of 0 - 1 uses your reference for the entire diffusion; 0 - 0.5 uses it from the beginning to halfway through, and so on.
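
If you drive this through the API instead of the UI, the Starting/Ending Control Step sliders correspond to per-unit start and end fields (commonly `guidance_start` and `guidance_end` in the extension's API; verify against your installed version). Below is a sketch of the two units behind the "Reference (control step 0-.5) + Depth" example further down; module and model names are assumptions, and they slot into the same `alwayson_scripts` payload shown in the Depth section.

```python
# Sketch: two stacked ControlNet units, Reference guiding only the first half
# of the diffusion while Depth guides the whole run. Field and model names are
# assumptions; check them against your ControlNet install.
reference_unit = {
    "input_image": "<base64 of the reference image>",  # fill in as in the Depth example
    "module": "reference_only",  # assumed preprocessor name
    "model": "None",             # reference works without a separate model file
    "guidance_start": 0.0,       # start using the reference immediately
    "guidance_end": 0.5,         # stop using it halfway through the diffusion
}

depth_unit = {
    "input_image": "<base64 of the base image>",
    "module": "depth_midas",               # assumed preprocessor name
    "model": "control_v11f1p_sd15_depth",  # assumed model name
    "guidance_start": 0.0,
    "guidance_end": 1.0,                   # keep depth guidance the whole time
}

controlnet_args = {"controlnet": {"args": [reference_unit, depth_unit]}}
```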

## Examples (in order of fidelity to the original)
- Universal Prompt
- Base Image
- Canny + Depth preprocessors
- Canny (only) preprocessor
- MLSD + Depth preprocessors
- MLSD (only) preprocessor
- Reference (control step 0-1) + Depth preprocessors
- Reference (control step 0-.5) + Depth preprocessors
