# Restoration SD Workflow
This is the restoration process I used with Stable Diffusion. My goal was an "artistic restoration" rather than a 1:1 reproduction, so I was okay with some details changing slightly. This covers the overall idea of the process, but doesn't go into detail on tool installation or direct usage of Automatic1111.
## Where I started and where I wound up
I had a photograph of my great-great-great-grandparents -- I believe the photo is from the 1850s or 1860s. I also believe it's a "crayon enlargement", aka a [photo-crayotype](https://en.wikipedia.org/wiki/Photo-crayotype).

I just worked from a camera-phone photo of the original because I didn't want to damage it, and I could get a clearer image that way.
And I wound up with a photo like this:

## Tools I used
* Automatic1111
* Stable Diffusion
* ControlNet
* [Elegance Model](https://civitai.com/models/5564/elegance)
* An image editor (doesn't matter which)
* A display tablet (optional)
## My process
### Initial image processing
First thing I did was crop it, set the aspect ratio I was going to work with (I decided on 1:1), and then used an "auto" filter in my image editor to roughly set the contrast.
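If you'd rather script this step than do it by hand, here's a minimal sketch using Pillow. The filenames, output size, and `cutoff` value are my assumptions, not part of the original workflow:

```python
from PIL import Image, ImageOps

def prep_scan(src_path, out_path, size=1024):
    img = Image.open(src_path).convert("L")     # work in grayscale
    side = min(img.size)                        # center-crop to 1:1
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    # rough stand-in for an editor's "auto" contrast filter
    img = ImageOps.autocontrast(img, cutoff=1)
    img = img.resize((size, size), Image.LANCZOS)
    img.save(out_path)
    return img

# prep_scan("scan.jpg", "scan_square.png")
```

An editor gives you more control (you can crop around the subjects rather than the center), but a script like this is handy if you're batch-testing different contrast settings.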
I wound up with a photo like this.

You can see where the "crayon" process was used on the original a little better now, especially in both subjects' hair and eyes.
I actually tried to generate from here, but I wasn't getting the results I wanted. So I painted out the speckled background, sketched in an extension of the subjects, and added a kind of "cloth backdrop" (which I generated in Midjourney and then cut to fit).
So I wound up with this:

### Prompting
Next, I brought it into img2img, both as the source image and as the ControlNet input, using the canny preprocessor with a canny model. I didn't change much from the defaults.
I started to put together a prompt, and after a number of iterations, this was the winner:
```
Professional Full body photo of a middle aged handsome dutch man and a beautiful dutch woman, arms around one another, white sheet backdrop, 1880s, 1880s new york city fashion, bowtie, dress with belt, high class clothing, chic, elegant, detailed eyes, (detailed skin, supple skin pores), (portrait), natural lighting, (backlighting:0.6), shallow depth of field, 8mm film grain, photographed on a Leica 10772 M-P, 50mm lens, F2.8, (highly detailed, intricate details, fine), 8k, HDR, deep focus, depth of field, albumin print photography by Dorothea Lange, Alasdair McLellan, Anders Petersen
Negative prompt: (oversaturated:1.3), bad hands, lowers, 3d render, cartoon, long body, ((blurry)), duplicate, ((duplicate body parts)), (disfigured), (poorly drawn), (extra limbs), fused fingers, extra fingers, (twisted), malformed hands, ((((mutated hands and fingers)))), contorted, conjoined, ((missing limbs)), logo, signature, text, words, low res, boring, mutated, artifacts, bad art, gross, ugly, poor quality, low quality, ((beard))
Steps: 100, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: -1, Face restoration: CodeFormer, Size: 512x512, Model hash: e3055c494a, Model: elegance_37, Denoising strength: 0.85, Mask blur: 4, ControlNet Enabled: True, ControlNet Module: canny, ControlNet Model: controlnetPreTrained_cannyV10 [e3fe7712], ControlNet Weight: 1, ControlNet Guidance Start: 0, ControlNet Guidance End: 1
```
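If you ever want to reproduce settings like these without the UI, they map roughly onto Automatic1111's `/sdapi/v1/img2img` endpoint (available when the web UI is launched with `--api`). This is a hedged sketch: the field names follow that API and the ControlNet extension's `alwayson_scripts` hook as I understand them, and the endpoint URL in the usage comment is the default, not something from this workflow:

```python
def build_img2img_payload(init_image_b64, prompt, negative_prompt):
    """Mirror the UI settings above as an img2img API payload."""
    return {
        "init_images": [init_image_b64],   # base64-encoded source image
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "steps": 100,
        "sampler_name": "DPM++ 2M Karras",
        "cfg_scale": 7,
        "denoising_strength": 0.85,
        "width": 512,
        "height": 512,
        "seed": -1,
        "restore_faces": True,             # CodeFormer, if set as the face restorer
        # The ControlNet extension reads its settings from here
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "enabled": True,
                    "module": "canny",
                    "model": "controlnetPreTrained_cannyV10 [e3fe7712]",
                    "weight": 1.0,
                    "guidance_start": 0.0,
                    "guidance_end": 1.0,
                }]
            }
        },
    }

# Typical usage (assumes a local A1111 instance started with --api):
# requests.post("http://127.0.0.1:7860/sdapi/v1/img2img",
#               json=build_img2img_payload(img_b64, PROMPT, NEGATIVE))
```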
I borrowed some of the prompt terms from the example images on the Elegance model's page.
Here's a preliminary run -- this is from before I added the backdrop and the extra drawing.

Then I fiddled mostly with the denoising strength on the source image; I didn't change much for ControlNet, other than maybe lowering a threshold a little.
Once I started getting half-decent results, I started doing large numbers of runs. I generated in the neighborhood of 100 images and went from there.
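A batch that size is easy to script against the same API. One habit worth adding here (mine, not part of the original run): pass explicit seeds instead of -1, so any "winner" can be re-run exactly later:

```python
import random

def make_batch(base_payload, n=100):
    """Clone a payload n times with explicit random seeds."""
    jobs = []
    for _ in range(n):
        job = dict(base_payload)
        # recorded up front, so a winning run is reproducible
        job["seed"] = random.randrange(2**32)
        jobs.append(job)
    return jobs
```

Keep the seed alongside each saved image; re-running a favorite later is then just a matter of resubmitting its payload.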
Then, eventually I picked a "winner" to base edits on.
### Inpainting
I chose this one because the dress was pretty close, her face was pretty nice, and my great-great-great-grandfather was mostly there.

I didn't love his face though, so I found another run with a face that I liked, and I replaced it.
In this next photo you can see the two replacements I made to set up the inpainting:
* I swapped in the new face, which wasn't a perfect match yet.
* I redrew the hand.

TIP: Make sure you use layers in your editor. That's how I was able to extract this work to share with you later, and it makes it much easier to keep iterating -- going back and forth between the image editor and inpainting in SD.
I then used img2img inpaint: first I'd paint around his face to fix the "seam" I had left, then I'd inpaint over the hand.
For both of these, I used a fairly low denoising strength, maybe 0.3 or 0.4.
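In API terms, inpainting is the same img2img endpoint plus a mask image. Another hedged sketch -- the field names follow the A1111 API as I understand it, and the fill-mode and "only masked" values are my assumptions rather than settings from this workflow:

```python
def build_inpaint_payload(image_b64, mask_b64, prompt, denoise=0.35):
    """img2img payload with a mask: repaint only the white areas."""
    return {
        "init_images": [image_b64],
        "mask": mask_b64,               # white = repaint, black = leave alone
        "mask_blur": 4,
        "denoising_strength": denoise,  # low (~0.3-0.4) just blends the seam
        "inpainting_fill": 1,           # 1 = "original": start from the pasted pixels
        "inpaint_full_res": True,       # work at full res on the masked region only
        "prompt": prompt,
        "steps": 100,
        "sampler_name": "DPM++ 2M Karras",
        "cfg_scale": 7,
        "width": 512,
        "height": 512,
    }
```

The low denoising strength is what makes this work as a touch-up rather than a regeneration: the model mostly keeps what you pasted in and smooths the edges.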
I feel like the hand was okay-ish at this point, but not perfect. Still, the power of inpainting is awesome -- I get a free upgrade on my paint job...

I still think it's a little dark, but overall I'm happy with it. SD still has hand problems, and the starting data for the hand was really rough too -- it was probably the most challenging part.
I also did this for a few other details, including the belt and the bow tie.
TIP: Generate a bunch of runs for your inpaintings and pick the best ones! After I test a prompt a couple of times, I run a batch of a dozen or so and see if I have a winner.
## Where I wound up
The final image:

A before-and-after view:
