# This is not an AI Art Podcast (Ep. 4)

## Intro
Welcome to episode four! This is your host, Doug Smith. This Is Not an AI Art Podcast is a podcast about, well, AI ART: technology, community, and techniques. The focus is Stable Diffusion, but all art tools are up for grabs, from the pencil on up, including pay-to-play tools like Midjourney. Less philosophy, more tire kicking. But if the philosophy gets in the way, we'll cover it.
But plenty of art theory!
Today we've got:
* Model Madness model reviews: Analog Madness v4 is OUT, Protogen 5.8, a film grain LoRA, and Perpetual Diffusion, a 2.1-based model
* "Bloods and crits": Art critique on 3 pieces
* Technique of the week: "Zooming in", plus a quick traditional technique
* My project update: so you can learn from my process
Available on:
* [Spotify](https://open.spotify.com/show/4RxBUvcx71dnOr1e1oYmvV)
* [iHeartRadio](https://www.iheart.com/podcast/269-this-is-not-an-ai-art-podc-112887791/)
* [Google Podcasts](https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy9kZWY2YmQwOC9wb2RjYXN0L3Jzcw)
Show notes are always included, with all the visuals, prompts, and technique examples. The format is intended so that you don't have to be looking at your screen, but the show notes have all the imagery, prompts, and details on the processes we look at.
## News update
Looks like automatic1111 is getting an update!
...They've been working on a dev branch.
I've been using vlad: https://github.com/vladmandic/automatic
Having good luck with it. I think I get better performance out of the box, with no tuning (which I did try with automatic1111).
Great [quote from Reddit](https://www.reddit.com/r/StableDiffusion/comments/133rxgu/are_giant_word_vomit_prompts_really_necessary_to/jib8imw?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button) from /u/The_Lovely_Blue_Faux:
> Being better at prompting gives you better batches. Break out that thesaurus and Google what kind of architecture that Byzantine city had. Look up the name of that ancient ritualistic garment you have in mind.
In the context of "4 paragraph word vomit prompts are garbage"
## Model Madness
### Analog Madness v4 is OUT
https://civitai.com/models/8030/analog-madness-realistic-model
This is my favorite model of the moment, and version 4 is showing awesome results.
You won't believe the words I had to delete from the example prompt from Civitai -- you wouldn't want your mother to see that.
```
1920s flapper, color photograph, highly detailed, sharp focus, 4k, analog style, ultra sharp image
Negative prompt: bad_prompt_version2:0.8
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: -1, Face restoration: CodeFormer, Size: 768x768, Model hash: 0b914c246e, Model: analogMadness_v40, VAE: vae-ft-mse-840000-ema-pruned
```


### Protogen 5.8
Not new, but everyone seems to love it.
I had seen some photographic pieces come out of it, so I wanted to try it. I can see why people like it. Clothes seem to come out great, though maybe it's just me.
https://huggingface.co/darkstorm2150/Protogen_x5.8_Official_Release#trigger-words
I used a modified "magic prompt" for this one:
https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion
```
1920s flapper, cool vintage clothes, elegant, highly detailed, centered, digital painting, artstation, concept art, smooth, sharp focus
Negative prompt: bad_prompt_version2:0.8
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: -1, Face restoration: CodeFormer, Size: 768x768, Model hash: 847da9eead, Model: ProtoGen_X5.8-pruned-fp16, VAE: vae-ft-mse-840000-ema-pruned
```

And then photographic... Not as impressed. But I don't think it's the main use case.
```
1920s flapper, cool vintage clothes, elegant, highly detailed, sharp focus, 8k UHD, DSLR, high quality, film grain, Fujifilm XT3, analog style
Negative prompt: bad_prompt_version2:0.8
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: -1, Face restoration: CodeFormer, Size: 768x768, Model hash: 847da9eead, Model: ProtoGen_X5.8-pruned-fp16, VAE: vae-ft-mse-840000-ema-pruned
```

### Film Grain LoRA
I think it's OK; it looks good. I'm wondering if it has an impact on the look of the subject, because the result isn't as "flapper-ish". Beware that it can shift the subject's style, so use it wisely.
Lowering the intensity helps.
But I'm still seeing some saturation problems to an extent. It might still need work.
https://civitai.com/models/33208/filmgirl-film-grain-lora
```
1920s flapper in a nightclub, elegant, highly detailed, sharp focus, 8k UHD, DSLR, high quality, film grain, Fujifilm XT3, analog style <lora:FilmG2:1>
Negative prompt: bad_prompt_version2:0.8
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: -1, Face restoration: CodeFormer, Size: 768x768, Model hash: 0b914c246e, Model: analogMadness_v40, VAE: vae-ft-mse-840000-ema-pruned
```


And with the LoRA at `<lora:FilmG2:1>` -- it's an improvement.
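If the grain effect is overpowering, the way to lower the intensity is the number at the end of the LoRA tag. The 0.6 below is just an example value I picked for illustration, not a weight from the show:

```
1920s flapper in a nightclub, elegant, highly detailed, sharp focus, film grain, analog style <lora:FilmG2:0.6>
```

Weights range from 0 (no effect) to 1 (full effect), and it's worth batch-testing a few values to find where the grain helps without washing out the subject.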

### Perpetual Diffusion
https://civitai.com/models/44412/perpetual-diffusion-10
Pretty recent, from April 18th. It's a 2.1-based model, which I'm not using a lot of right now, but I'm sure we'll see more and more models based on 2.1.
Results seem good. I took a "graphic design" prompt from the examples and modified it.
```
1920s flapper in a nightclub, rendered in cinema4d, graphic design poster art, close - up, michael komarck, streamline elegance, sven nordqvist
Negative prompt: bad_prompt_version2:0.8
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: -1, Face restoration: CodeFormer, Size: 768x768, Model hash: 2d05a41142, Model: perpetualDiffusion10_v10Moon
```


### Classipeint Embedding
https://civitai.com/models/3768/classipeint
Wouldn't work for me, but I want to give it another try.
### Resources
Dynamic prompts are awesome! Try them out; here's a [mention on Reddit](https://www.reddit.com/r/StableDiffusion/comments/131qvki/dynamic_prompt_is_amazing/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button).
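For anyone who hasn't tried the Dynamic Prompts extension, the basic syntax looks like this (the variant values below are my own made-up examples, not from the show):

```
1920s flapper, {color photograph|digital painting|charcoal sketch}, {elegant|moody} lighting, __artists__
```

`{a|b|c}` picks one option per generation, and `__artists__` pulls a random line from a wildcard file (here, a hypothetical `artists.txt` in your wildcards folder). Great for exploring variations of one formula across a batch.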
## Bloods & Crits
Why do we critique? Because we need to self-critique. We need to critique what comes out of our generations so we can iterate.
### Trash > Architecture Visuals using control net, on [reddit](https://www.reddit.com/r/StableDiffusion/comments/131qa5t/trash_architecture_visuals_using_control_net/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button)
Incredible idea. I once had a professor say "nice use of local materials" when someone came in with a painting on a pizza box. Hey, they did their homework.
The idea is GREAT. I love this progression here, we start with trash and we wind up with treasure.
Using the same idea, I'd probably start with a more interesting composition on the original object used for the ControlNet. While these are cool and interesting, individually they look halfway complete with the gradient background; as a series, though, it reads as more complete.
A common mistake is putting the whole subject in the frame. You can do that, but you need to mitigate for it.

### Cute downtown shopping across the world, [on reddit](https://www.reddit.com/r/SDLandscapes/comments/130vlg8/cute_downtown_shopping_around_the_world_and_then/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button)
Few things that really stand out:
* It connected with me as the audience. I like landscapes, and there's a mention of New Hampshire
* The artist also connected with me via comment!
* I really like that the artist has a flexible formula that allows a lot of interesting variations -- look for those in your own work.
I don't have any particular crits on the individual pieces. I think there's a lot here for narrative, and the artist clearly has a workflow that's working VERY VERY well for generating beautiful landscapes.
They even have an embedding they trained on their own art!
Shout out to the https://www.reddit.com/r/SDLandscapes/ sub.

Related reading on ancestral samplers: https://stable-diffusion-art.com/samplers/#Ancestral_samplers
### Famous buildings with towels, [on reddit](https://www.reddit.com/r/StableDiffusion/comments/133mrav/famous_buildings_reenacted_with_towels/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button)
If this were in a museum as an actual installation, I would be all over it. I just think it's a fun way to play with a common item -- like, sculpturally.
I like that it plays with a sculptural theme.
Visually, these could use some work in terms of composition. I get that it shows the entire subject, and that's effective to a point. But playing with how the subject sits in the frame could create more movement of the eye, and also enhance how 3D it looks.
With sculpture we want it to look so good that we want to see it from another angle. That's a sign of success.
The concept is fun and takes it to another level. There's narrative, but maybe there could be more.

## Traditional technique: Draw with your eraser
Instead of trying to be perfect -- use your eraser.
Do this digitally too. Sometimes I work on a new layer and get loose; later I use marquee select and delete stuff, or use an eraser tool.
Use this to GO DARKER when you're drawing physically, and use your eraser to build out areas of contrast. A major potential drawing problem is not going dark enough; then you don't get enough contrast.
As an example of my own, here's a sub-10-minute sketch (maybe more with processing it for the blog, hah!)
### Here's a layout before erasing
I can stay nice and loose and get my proportions (which still have problems) and pose without worrying about "making a mess"; I still have another phase of drawing...

### And after erasing...
Note how I make some of the decisions about the outer contour, and remove the proportion and layout lines.
I also doubled up my layers to get more contrast.

### And then we use it as a scribble control net
(I overpainted/inpainted the hands and touched up the face)

## Technique of the week: Zooming in!
This is a study; I'm not sure I'm done with it, but I'm trying to figure out the process.
First, I took an old postcard.

I upscaled it with Gigapixel.
Then I used it with ControlNet and img2img and had a starting point from there.

Then I took squares out of it, selected those, and left a mask in my image editor so I could put things back later.
I would take those squares and generate some assets to start with on an image, like...

Then I would inpaint portions of it until I got what I was looking for, which was a kind of Norman Rockwell, 1940s-ideals sort of thing. So the "final" looked like:

And then I would take those, scale them down, and put them back on the main image.
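The crop/paste-back step above can be sketched in a few lines of Pillow. The coordinates, sizes, and function names here are my own illustration of the idea, not the exact workflow from the show:

```python
# Sketch of the "zoom in" crop/paste-back step using Pillow.
# Box coordinates and sizes are made-up example values.
from PIL import Image


def zoom_in_region(base: Image.Image, box: tuple, work_size: int = 768) -> Image.Image:
    """Crop a square region and upscale it to a size SD works well at."""
    crop = base.crop(box)  # box = (left, top, right, bottom)
    return crop.resize((work_size, work_size), Image.LANCZOS)


def paste_back(base: Image.Image, detail: Image.Image, box: tuple) -> Image.Image:
    """Scale the worked-on detail back down and paste it into the base image."""
    left, top, right, bottom = box
    small = detail.resize((right - left, bottom - top), Image.LANCZOS)
    out = base.copy()
    out.paste(small, (left, top))
    return out


# Placeholder image standing in for the upscaled postcard:
base = Image.new("RGB", (2048, 1536), "gray")
box = (512, 512, 1024, 1024)          # one of the squares taken out
detail = zoom_in_region(base, box)    # -> send this to img2img / inpainting
result = paste_back(base, detail, box)
```

The mask kept in the image editor serves the same purpose as `box` here: it remembers where each square came from so the inpainted version lands back in exactly the right spot.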

## Project update
Really inspired by this gallery showing; I need to go see it:
https://www.sevendaysvt.com/vermont/ghosts-civil-war-portraits-by-william-betcher-revisit-the-dead/Content?oid=37943409
I love that the artist cherishes how unique and special these photos were, and notes that that's something lost in this digital age. Hits the feels.

Did my own "sorta cheese" inspired piece. Might be technique of the week next week.

Been doing some physical hardware upgrades: building out another machine with a decent GPU, which will become a Linux box for some more automated stuff. Been stable-diffusioning in wind0ze because of art tools.
I've started to set TOO HIGH a bar for myself with my project, and now I'm working very, very hard to produce stuff. I need to lower expectations and learn from my audience. Sometimes my easy-to-make stuff does really well. Granted, sometimes the ideas aren't easy to come by.