
## Intro
Welcome to episode fourteen! This is your host, Doug Smith. This Is Not An AI Art Podcast is a podcast about, well, AI ART: technology, community, and techniques, with a focus on Stable Diffusion, but all art tools are up for grabs, from the pencil on up, including pay-to-play tools like Midjourney. Less philosophy, more tire kicking. But if the philosophy gets in the way, we'll cover it.
But plenty of art theory!
Today we've got:
* Model madness model reviews: On 2 LoRAs
* Bloods and crits: On 3 pieces
* Technique of the week: Using a color palette!
* My project update: So you can learn from it!
Bunch of news, a PSA, but no art crits -- I'm late to record, and I was out camping all weekend, and while it was glorious, I am now behind on all my hustles!
Available on:
* [Spotify](https://open.spotify.com/show/4RxBUvcx71dnOr1e1oYmvV)
* [iHeartRadio](https://www.iheart.com/podcast/269-this-is-not-an-ai-art-podc-112887791/)
* [Google Podcasts](https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy9kZWY2YmQwOC9wb2RjYXN0L3Jzcw)
Show notes are always included, with all the visuals, prompts, and technique examples. The format is intended so that you don't have to be looking at your screen, but the show notes have all the imagery, prompts, and details on the processes we look at.
## News!
## Model madness
### Serenity
[From civitai](https://civitai.com/models/110426/serenity)
[On Reddit](https://www.reddit.com/r/StableDiffusion/comments/151epgp/new_photorealistic_base_model_serenity/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=1)
### Skin LoRA
[From civitai](https://civitai.com/models/109043?modelVersionId=122580)
Decent results -- this is worth returning to.
```
close-up portrait of a 1920s flapper, (detailed skin:1.1), highly detailed background, perfect lighting, best quality, 4k, 8k, ultra highres, raw photo in hdr, sharp focus, intricate texture, best quality, 4k, 8k, ultra highres, sharp focus, intricate texture <lora:polyhedron_new_skin_v1.1:0.25>
Negative prompt: (bad_prompt_v2:0.8),Asian-Less-Neg,bad-hands-5, BadDream, (skinny:1.2)
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1730087582, Face restoration: CodeFormer, Size: 512x512, Model hash: 47170319ea, Model: juggernaut_final, Denoising strength: 0.52, Hires upscale: 1.5, Hires upscaler: Latent
```
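If you're working outside of A1111, here's roughly what that low-weight LoRA setup looks like in Hugging Face diffusers -- a minimal sketch, not the exact setup above. The checkpoint and LoRA paths are placeholders, the plain-word negatives stand in for the embedding-based brew in the settings, and A1111-style `(word:1.1)` weighting isn't native to diffusers, so it's dropped.
```python
# Minimal sketch: a skin-detail LoRA applied at low weight in diffusers.
# File paths are placeholders for a locally downloaded Juggernaut checkpoint
# and the LoRA file from Civitai.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "juggernaut_final.safetensors",  # placeholder path
    torch_dtype=torch.float16,
).to("cuda")

# DPM++ 2M with Karras sigmas, matching the sampler in the settings above
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

# Load the LoRA file downloaded from Civitai
pipe.load_lora_weights(".", weight_name="polyhedron_new_skin_v1.1.safetensors")

image = pipe(
    "close-up portrait of a 1920s flapper, detailed skin, sharp focus, raw photo",
    negative_prompt="skinny, bad hands, blurry, deformed",
    num_inference_steps=30,
    guidance_scale=7.0,
    cross_attention_kwargs={"scale": 0.25},  # keep the LoRA weak, mirroring <lora:...:0.25>
).images[0]
image.save("flapper_skin_lora.png")
```
The important bit is keeping the LoRA scale low (0.25 here) so it adds skin texture without steering the whole image.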


### Pixelator LoRA
[On Reddit](https://www.reddit.com/r/StableDiffusion/comments/153avo5/pixel_portrait_v1_64x64_pixelperfect_lora/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=1)
[On Civitai](https://civitai.com/models/111793/pixel-portrait)
This is neat, and fun! I had used some pixel art models before that didn't seem to be as good as this.
The model page says to use clip skip = 2, but that didn't seem to help me much.
```
a 1920s flapper <lora:pixel-portrait-v1:0.9>
Negative prompt: (bad_prompt_v2:0.8),Asian-Less-Neg,bad-hands-5, BadDream, (skinny:1.2)
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3471712642, Size: 512x512, Model hash: 47170319ea, Model: juggernaut_final
```

```
a 1990s raver <lora:pixel-portrait-v1:0.9>
Negative prompt: (bad_prompt_v2:0.8),Asian-Less-Neg,bad-hands-5, BadDream, (skinny:1.2)
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1557774298, Size: 512x512, Model hash: 47170319ea, Model: juggernaut_final
```
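And the same kind of diffusers sketch for the pixel LoRA, this time with clip skip. This assumes a recent diffusers release that exposes a `clip_skip` argument on the pipeline call; note that A1111 and diffusers count skipped layers differently, so treat the value as something to experiment with rather than a drop-in match for "clip skip = 2". Paths are placeholders.
```python
# Minimal sketch (placeholder paths): Pixel Portrait LoRA at weight 0.9,
# with clip skip, assuming a recent diffusers release.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "juggernaut_final.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True  # DPM++ 2M Karras equivalent
)

pipe.load_lora_weights(".", weight_name="pixel-portrait-v1.safetensors")

image = pipe(
    "a 1990s raver",
    negative_prompt="blurry, deformed, bad hands",
    num_inference_steps=30,
    guidance_scale=7.0,
    clip_skip=1,  # diffusers counts layers *skipped*, so this is roughly A1111's "clip skip = 2" -- verify on your setup
    cross_attention_kwargs={"scale": 0.9},  # LoRA weight, mirroring <lora:...:0.9>
).images[0]
image.save("pixel_raver.png")
```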

## Bloods and crits
### Dune Hype
[From Reddit](https://www.reddit.com/r/StableDiffusion/comments/14zg2fs/dune_hype_after_pascal_blanch%C3%A9/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=1)
Usually I'd say that I don't like stuff that borrows directly from existing media -- but... I like Dune, what can I say?
And I respect fan artists; it's fine if it's your thing, it's just kind of not mine.
But it plays to a certain audience.
* Pretty good asymmetrical balance
* Colors work really well together
* I'm not super sure about the yellow stripe on the Bene Gesserit face

### Robber
[On Reddit](https://www.reddit.com/r/StableDiffusion/comments/154ikq6/robber/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=1)
The rendering is really incredible, and it looks good overall.
I think there's some "weirdness" that needs to be resolved:
* Backpack doesn't seem to be sitting on her back like a backpack
* The weapons are kinda... random
* You could either add more weapons or remove them altogether
* The composition is OK, but it's centered
* The narrative is... there, but it's not pushed
Like, maybe it could grow into a full scene from here. Make it look like she's throwing one of the throwing stars, maybe.

### Third eye mind expansion
[On Reddit](https://www.reddit.com/r/StableDiffusion/comments/152yf68/third_eye_mind_expansion/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=1)
I really really like the portrait! It's different, and it has good skin detail.
This really has a strong concept that takes away from GPS. It also pushes the narrative. I like the closed eyes + the open eye -- it has a sort of part-to-whole relationship.
The eye + earrings + forehead dot + necklace really add to the repetition of form, as well as the dreadlocks. The composition is good overall because of this.
I'm not totally sold on the "third eye" though. It seems like... it's not totally meshing with the rest of the image.
I'd probably inpaint it for a while and see what comes out.
There are a few detail things that need to be fixed -- one of which is that the necklace doesn't go all the way around her head. Look for this kind of stuff when you're working.

## Quick study
Looking at negative prompts.
They do matter.
I don't think the super wordy big ones make a huge difference, and overall, I'm happy with my current little brew for a negative prompt.
I often have `(skinny:1.2)` in there because I don't really like the super-skinny model look (it adds to GPS).
```
RAW photo, [Marilyn Monroe|Diane von Furstenberg|Linda Hamilton], color photography, 4k, analog style, film grain, magazine photo shoot, Fujicolor Superia X-tra 400 film, grain, high ISO, i can't believe how beautiful this is!!!!!!,
Negative prompt: (bad_prompt_v2:0.8),Asian-Less-Neg,bad-hands-5, BadDream, (skinny:1.2)
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1670781974, Size: 768x768, Model hash: 47170319ea, Model: juggernaut_final
```
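The cheapest way to convince yourself negative prompts matter is to render the same seed with and without one. Here's a minimal diffusers sketch of that A/B; the checkpoint path is a placeholder, the embedding-based negatives above (bad_prompt_v2, BadDream, and friends) are textual inversion files that would need `pipe.load_textual_inversion(...)`, so plain words stand in for them, and the `[A|B|C]` alternation is an A1111-only trick that's dropped here.
```python
# Minimal sketch: same prompt, same seed, with and without a negative prompt.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "juggernaut_final.safetensors", torch_dtype=torch.float16  # placeholder path
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

prompt = ("RAW photo, color photography, 4k, analog style, film grain, "
          "magazine photo shoot, high ISO")

for tag, neg in [("without_neg", None), ("with_neg", "skinny, bad hands, blurry, deformed")]:
    gen = torch.Generator("cuda").manual_seed(1670781974)  # fixed seed for a fair A/B
    image = pipe(
        prompt,
        negative_prompt=neg,
        num_inference_steps=30,
        guidance_scale=7.0,
        height=768, width=768,
        generator=gen,
    ).images[0]
    image.save(f"neg_test_{tag}.png")
```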


## Technique of the week: Using a color palette!
The idea is [from this reddit thread](https://www.reddit.com/r/StableDiffusion/comments/14zqqa6/is_it_possible_to_create_images_based_on_a_colour/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=1)
I wanted to try the t2ia color ControlNet model, but the only one I can find is from [tencent on civitai]() and it doesn't have safetensors files, so I decided it's not worth it right now. [Olivio has a YT video on it, too](https://www.youtube.com/watch?v=JYGCDGNpmeU).
This example is from Midjourney -- just with the prompt `aesthetic color pallete` -- you can also Google image search that if you care to.

Then I produced an image without the color palette... (no hires fix, or anything like that)
```
RAW, analog style, woman in a (cottagecore:1.1) interior, sharp focus, 8k UHD, high quality, film grain, Fujifilm XT3
Negative prompt: (bad_prompt_v2:0.8),Asian-Less-Neg,bad-hands-5, BadDream, (skinny:1.2)
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 912685328, Face restoration: CodeFormer, Size: 512x512, Model hash: 47170319ea, Model: juggernaut_final
```

Then I loaded the color palette image from MJ into ControlNet in SD for a "ControlNet Shuffle".
It's not "bad" per se, but it does seem to pick up more influence from the input image than maybe we want.
```
RAW, analog style, woman in a (cottagecore:1.1) interior, sharp focus, 8k UHD, high quality, film grain, Fujifilm XT3
Negative prompt: (bad_prompt_v2:0.8),Asian-Less-Neg,bad-hands-5, BadDream, (skinny:1.2)
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3606452651, Face restoration: CodeFormer, Size: 512x512, Model hash: 47170319ea, Model: juggernaut_final, Denoising strength: 0.52, ControlNet 0: "preprocessor: shuffle, model: control_v11e_sd15_shuffle [526bfdae], weight: 1, starting/ending: (0, 1), resize mode: Crop and Resize, pixel perfect: False, control mode: Balanced, preprocessor params: (512, 64, 64)", Hires upscale: 1.5, Hires upscaler: Latent
```
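If you're in diffusers land, the same shuffle trick looks roughly like this -- a minimal sketch using the public `lllyasviel/control_v11e_sd15_shuffle` ControlNet and the `ContentShuffleDetector` preprocessor from `controlnet_aux`. I'm loading the stock SD 1.5 base for simplicity (swap in your own SD 1.5-family checkpoint), and `palette.png` is a placeholder for the Midjourney palette image.
```python
# Minimal sketch: color-palette transfer via ControlNet Shuffle in diffusers.
import torch
from diffusers import (StableDiffusionControlNetPipeline, ControlNetModel,
                       DPMSolverMultistepScheduler)
from diffusers.utils import load_image
from controlnet_aux import ContentShuffleDetector

palette = load_image("palette.png")              # placeholder for the palette image
control_image = ContentShuffleDetector()(palette)  # shuffle preprocessor

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11e_sd15_shuffle", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # swap in your own SD 1.5-family checkpoint if you like
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "RAW, analog style, woman in a cottagecore interior, sharp focus, film grain",
    negative_prompt="skinny, bad hands, blurry",
    image=control_image,
    num_inference_steps=30,
    guidance_scale=7.0,
    controlnet_conditioning_scale=1.0,  # "weight: 1" in the A1111 settings above
).images[0]
image.save("cottagecore_shuffle.png")
```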

And then I tried the "reference only" ControlNet, which was also fairly effective.

Last but not least, I tried img2img with a high denoising strength, around 0.7.
Maybe slightly less effective.
```
RAW, analog style, woman in a (cottagecore:1.1) interior, sharp focus, 8k UHD, high quality, film grain, Fujifilm XT3
Negative prompt: (bad_prompt_v2:0.8),Asian-Less-Neg,bad-hands-5, BadDream, (skinny:1.2)
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 6, Seed: 2522625461, Face restoration: CodeFormer, Size: 768x768, Model hash: 47170319ea, Model: juggernaut_final, Denoising strength: 0.7
```
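Same caveats as above, but for completeness, a minimal img2img sketch at ~0.7 strength, feeding the palette image in directly. Paths are placeholders; the idea is that at high denoising the palette's grid layout mostly disappears while its colors still steer the result.
```python
# Minimal sketch: img2img from the palette image at high denoising strength.
import torch
from diffusers import StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "juggernaut_final.safetensors", torch_dtype=torch.float16  # placeholder path
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

init = load_image("palette.png").resize((768, 768))  # placeholder palette image

image = pipe(
    "RAW, analog style, woman in a cottagecore interior, sharp focus, film grain",
    negative_prompt="skinny, bad hands, blurry",
    image=init,
    strength=0.7,            # high denoising: keep the palette's colors, lose its layout
    num_inference_steps=30,
    guidance_scale=6.0,
).images[0]
image.save("cottagecore_img2img.png")
```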
