# Photoshop's AI
Photoshop's new AI tool adds content-generation capabilities to images: it can automatically create new elements and fill in missing parts of an image. However, it has notable limitations, particularly when creating realistic people or objects with organic shapes, such as cars. Here's an objective overview of the tool and its limitations:
- Content Creation: The AI tool analyzes existing image content and generates new content based on that analysis. It can seamlessly remove unwanted elements, extend backgrounds, fill in gaps, or generate entirely new objects (it's better than SD at inventing something from nothing). It's most effective when the content is well-defined and exhibits clear patterns or structures.
- Limitations with People: Creating realistic people with the AI tool is nigh impossible; it struggles to generate accurate human features. For better-looking people in images, use SD instead.
- Limitations with Organic Shapes: Objects with organic shapes, such as cars, can also pose challenges for the AI tool. Achieving precise proportions and shapes may be difficult; interestingly, it will still produce nice textures and details.
- Workflow Optimization: To optimize your workflow, use the AI tool for non-organic elements or background enhancements. It can also be used on very small parts of SD-generated people to fix things like hands.
- Experimentation and Iteration: While the AI tool may not consistently achieve desired results with people or organic shapes, experimentation and iteration are key. Adjusting tool parameters, combining it with manual edits, or applying artistic filters can help refine the final outcome.
(Comparison images, captioned "AI" and "Render", were here.)
# Stable Diffusion GC
## Check for updates
To ensure you have the latest features and improvements in Stable Diffusion, it's important to regularly check for updates. Follow these steps to check for updates and apply them:
- Open Stable Diffusion.
- Go to the "Extensions" menu.
- Select "Check for Updates".
- If updates are available, click on "Apply and Restart UI" to install the updates.
- After the UI restarts, you'll have the latest version of Stable Diffusion with all the new enhancements.
Keeping your tool updated ensures access to the most recent control net and user interface improvements.
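The Extensions tab updates extensions; the core UI itself can usually be updated from the command line too, assuming an AUTOMATIC1111-style git install. A minimal sketch, with the install directory as a placeholder:

```python
# A minimal sketch, assuming the web UI lives in a git checkout.
# WEBUI_DIR is a hypothetical path -- point it at your own install.
import os
import subprocess

WEBUI_DIR = os.path.expanduser("~/stable-diffusion-webui")

def update_webui(path: str) -> None:
    """Pull the latest commits for the web UI checkout."""
    subprocess.run(["git", "-C", path, "pull"], check=True)

update_webui(WEBUI_DIR)
```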
## Checkpoint Models
Stable Diffusion provides a variety of checkpoint models that you can use to enhance your image generation. It's important to note that while these models have been tested and found to work well, their performance and suitability may vary depending on the specific use case. Additionally, new checkpoint models may become available over time, so it's always worth exploring and trying different models to find the ones that best align with your preferences and requirements.
Here are some recommended checkpoint models along with their unique features:
- [Deliberate](https://civitai.com/models/4823/deliberate): This model is known for its versatility and overall good performance across various types of images.
- [Imperfect Faces](https://civitai.com/models/16804): If your goal is to generate realistic faces, this model is a great choice, as it specializes in producing high-quality faces that closely resemble real people.
- [High Contrast Cinematic](https://civitai.com/models/15022/526mix-v145): This model excels in creating images with high contrast and cinematic qualities, making it ideal for dramatic and impactful visuals.
Remember that different parts of the same object or image may benefit from different checkpoint models. Experiment with the available models to find the ones that produce the best results for your specific needs.
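In GC these checkpoints are selected from the UI, but for reference, here is a hedged sketch of loading a downloaded checkpoint file programmatically with the `diffusers` library; the filename and prompt are placeholders:

```python
# Load a Civitai-style checkpoint file with `diffusers`.
# "deliberate_v2.safetensors" is a hypothetical local filename.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "deliberate_v2.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a cinematic photo of a lighthouse at dusk").images[0]
image.save("out.png")
```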
## img2img / PNG info
- In Stable Diffusion, the primary focus of GC is the img2img process. This process involves transforming an input image into an output image using the AI checkpoint model.
- Stable Diffusion can save the generation settings inside the PNG images it outputs. By keeping these PNGs, you can easily refer back to the exact settings later or share them with others.
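As an illustration, A1111-style UIs store these settings in a PNG text chunk (commonly named "parameters"), which can be read back with Pillow; the filename below is a placeholder:

```python
# Read the embedded generation settings back out of a saved PNG.
from PIL import Image

img = Image.open("generation.png")
params = img.info.get("parameters")  # None if no settings were embedded
print(params)
```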
## prompts
When using prompts in Stable Diffusion, keep the following tips in mind:
- Avoid spaces after the commas between words to improve results. For example, instead of "word1, word2, " use "word1,word2,".
- Utilize multiple different words for the same concept to help the AI generate desired results.
### Example base positive prompt:
`{{{{{{{{hyperrealistic}}}}}}}},{{{Photography}}},{{masterpiece}},perfect anatomy,intricate,(highly detailed),photography,vibrant,perfect anatomy,caustics,textile shading,super detailed,{{{best quality}}},{{ultra-detailed}},{illustration},`
- Use something like the prompt above together with a simple subject-specific prompt; for people, the more basic the subject prompt, the better.
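Purely as an illustration of the comma-spacing tip above, a tiny hypothetical helper that joins prompt tags without spaces:

```python
def build_prompt(tags: list[str]) -> str:
    """Join prompt tags with commas and no surrounding spaces."""
    return ",".join(tag.strip() for tag in tags) + ","

print(build_prompt(["word1", " word2", "best quality"]))
# -> word1,word2,best quality,
```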
### Negative Prompts
Negative prompts in Stable Diffusion are one of the best ways to steer the AI model away from specific elements or toward a desired look. They provide instructions on what to avoid or discourage during the image generation process, giving you more control over the output. This part of the prompt should be the most detailed, compared to the positive prompt.
Here are some textual inversion embeddings that provide negative prompts:
- [EasyNegative](https://civitai.com/models/7808/easynegative): EasyNegative offers a collection of negative prompts that can be used to guide the AI model. These prompts include instructions to avoid certain characteristics or styles in the generated images.
- [Negative Embedding for Deliberate](https://civitai.com/models/30224/negative-embedding-for-deliberate): This resource provides negative prompts specifically designed for use with the Deliberate model. It includes prompts that help steer the AI away from specific attributes or outcomes.
- [Negative Embedding for Realistic Vision V20](https://civitai.com/models/36070/negative-embedding-for-realistic-vision-v20): This resource offers negative prompts tailored for use with the Realistic Vision V20 model. These prompts assist in guiding the AI to generate images with specific qualities or avoid certain undesirable elements.
When using negative prompts, you can experiment with different combinations and variations to achieve the desired results. These prompts can help you refine the AI's output and create images that align more closely with your vision.
Note that textual inversion refers to a technique where a small trained embedding is used to guide the AI's behavior; the embeddings above are downloaded once and then invoked by name inside the negative prompt.
### Example base negative prompt:
`EasyNegative,multiple angle,monochrome,black and white,blurry,longbody,lowres,bad anatomy,bad hands,missing fingers,extra digit,fewer digits,cropped,worst quality,low quality,text,error,fewer digits,cropped,worst quality,low quality,normal quality,jpeg artefacts,signature,watermark,username,blurry,missing fingers,bad hands,missing arms,head_out_of_frame,missing legs,bat legs,nude,boobs,text,logo,words,title,PNG,spirals,blurry`
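For reference, a hedged sketch of how such an embedding can be loaded programmatically with `diffusers` and triggered by name in the negative prompt; the file paths and token are assumptions based on how these embeddings are usually distributed:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "deliberate_v2.safetensors",  # hypothetical local checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Register the embedding so "EasyNegative" becomes a usable prompt token.
pipe.load_textual_inversion("EasyNegative.safetensors", token="EasyNegative")

image = pipe(
    prompt="photography,masterpiece,highly detailed",
    negative_prompt="EasyNegative,lowres,bad anatomy,bad hands,blurry",
).images[0]
```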
## Inpainting
Inpainting allows you to fill in masked areas in your images. Follow these guidelines when using inpainting in Stable Diffusion GC:
- Make sure to select only the masked area.
- In Photoshop, use the marquee tool to select and copy a merged layer. Paste the selection in Stable Diffusion GC, generate the image, and paste it back over the marquee in Photoshop. This creates an efficient workflow.
- For faces, use low denoising values between 0.1 and 0.3.
- For whole people or objects, use denoising values between 0.4 and 0.7.
- Avoid using a masked area larger than your resolution, as it may result in lower-quality output.
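For context, the same idea expressed with `diffusers` (a hedged sketch; the model ID and file paths are placeholders, and `strength` plays the role of the denoising values recommended above):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # placeholder inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("photo.png").convert("RGB")
mask = Image.open("mask.png").convert("RGB")  # white = area to repaint

result = pipe(
    prompt="a detailed face",
    image=init,
    mask_image=mask,
    strength=0.3,  # low values for faces, per the guidance above
).images[0]
result.save("inpainted.png")
```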
## Sampling and sampler
Sampling and the choice of sampler play a significant role in the output. Here are some recommendations:
- Euler A is an excellent default sampler. Avoid lowering the number of sampling steps below 20.
- DPM++ 2M Karras is useful for capturing fine details and achieving more photorealistic results. However, it requires around 30 steps and may significantly slow down the AI. Only use it if you need to enhance details in a specific seed image.
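In `diffusers` terms, these two samplers are commonly mapped to the scheduler classes below; treat the mapping and the model path as assumptions:

```python
import torch
from diffusers import (
    StableDiffusionPipeline,
    EulerAncestralDiscreteScheduler,
    DPMSolverMultistepScheduler,
)

pipe = StableDiffusionPipeline.from_single_file(
    "deliberate_v2.safetensors",  # hypothetical local checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Euler A: a solid default; keep steps at 20 or above.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
image = pipe("a portrait photo", num_inference_steps=20).images[0]

# DPM++ 2M Karras: finer detail, but wants ~30 steps and runs slower.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
image = pipe("a portrait photo", num_inference_steps=30).images[0]
```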
## Restore Faces and Tiling
In Stable Diffusion, there are two important settings related to image generation: "Restore Faces" and "Tiling." Understanding how these settings work can greatly impact the quality and outcome of your generated images.
### Restore Faces
The "Restore Faces" option is a setting that can be enabled or disabled. When enabled, Stable Diffusion attempts to restore any missing or distorted faces in an image. This can be particularly useful when working with images that have incomplete or damaged facial features.
It's important to note that if you have "Restore Faces" enabled and you're not specifically working with an image that contains faces, the AI might attempt to generate faces randomly within the image. This could result in unexpected or unintended outcomes. Therefore, it's recommended to enable "Restore Faces" only when working with images that actually feature faces to ensure the best results.
### Tiling
The "Tiling" option is used to create repeating patterns or textures by making the image tile. This can be useful for generating textures or fabrics.
However, it's worth mentioning that achieving good results with tiling can be more challenging compared to other techniques. Getting good textures using tiling often requires careful experimentation and fine-tuning of the parameters. It's recommended to practice and iterate to achieve the desired outcome when working with tiling.
# settings
## size
You can utilize the protractor button to automatically set the size of the output image based on the dimensions of the copied image. This feature conveniently ensures that the generated image maintains the same size as the source image.
Note that even when using the protractor button, the sum of the dimensions (width + height) should not exceed approximately 1200 pixels, or generation may crash.
Additionally, it is advisable to avoid setting the size below 512 pixels, as doing so may significantly compromise the image quality. If the copied image is below 512 pixels, it is recommended to increase the size to maintain a satisfactory level of quality in the generated output.
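As an illustration only, a small helper that encodes these rules (minimum side of 512 px, width + height under ~1200 px, rounded to multiples of 8, which SD generally expects); the limits come from this guide, not from the tool itself:

```python
def clamp_size(width: int, height: int) -> tuple[int, int]:
    """Clamp output dimensions to the limits recommended above."""
    def snap(v: int) -> int:
        # Round down to a multiple of 8, but never below 512.
        return max(512, (v // 8) * 8)

    w, h = snap(width), snap(height)
    if w + h > 1200:
        # Scale both sides down proportionally to fit the budget.
        scale = 1200 / (w + h)
        w, h = snap(int(w * scale)), snap(int(h * scale))
    return w, h

print(clamp_size(1024, 768))  # -> (680, 512)
```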
## batch count
The Batch Count setting in Stable Diffusion controls the number of images generated simultaneously during the diffusion process. It can provide more options and variations, but takes longer than risking a single generation.
- Setting the Batch Count to 1 generates a single image at a time. This option is suitable when you prefer a more focused approach with minimal noise and want to quickly generate specific images.
- Increasing the Batch Count to 3 or 6 generates multiple images simultaneously, giving you a broader range of options and variations. This can be useful for faces when using high denoising values.
## CFG scale
CFG Scale, short for Classifier-Free Guidance scale, is a parameter in Stable Diffusion that controls how strongly the prompts influence the AI model's decision-making during image generation.
- A good starting point for CFG Scale is typically 5. This value provides moderate guidance to the AI model without overwhelming its decision-making process.
- If you find that the AI is not adequately responding to your prompts or that it is not capturing the desired details, you can try increasing the CFG Scale slightly. This can help the AI pay more attention to the provided prompts and produce more aligned results.
- The CFG Scale setting in Stable Diffusion typically does not require adjustment and can remain at the default value of 5 for most cases. Changing this setting without proper understanding may result in unintended consequences, such as breaking the AI's behavior and making you angry.
## Denoising
The denoising setting in Stable Diffusion controls the level of noise introduced into the image generation process. It has an impact on how the AI model utilizes the source image and the degree to which it generates new content.
When the denoising setting is low (e.g., 0.1-0.3), the AI model relies more on the details present in the source image. In this case, the AI will attempt to preserve and enhance the existing elements, resulting in images that closely resemble the source.
Conversely, when the denoising setting is higher (e.g., 0.5 or above), the AI model introduces more random noise into the image generation process. This increased noise provides the AI with more freedom to create new content, allowing for greater creativity and the generation of more novel and imaginative results.
It's important to note that finding the right balance for the denoising setting depends on the specific image and desired outcome. If you find that the generated images at low denoising settings lack the desired level of creativity or are too similar to the source, you can try increasing the denoising setting to encourage the AI to generate more original content.
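In programmatic terms, denoising corresponds to the img2img `strength` parameter; a hedged sketch with `diffusers`, where paths and prompts are placeholders:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "deliberate_v2.safetensors",  # hypothetical local checkpoint
    torch_dtype=torch.float16,
).to("cuda")

source = Image.open("source.png").convert("RGB")

# Low denoising (~0.2): stays close to the source image.
subtle = pipe("a portrait photo", image=source, strength=0.2).images[0]

# Higher denoising (~0.6): more noise, more freedom to invent.
bold = pipe("a portrait photo", image=source, strength=0.6).images[0]
```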
## LoRAs
LoRAs are small add-on models you can download and assign a weight to. There aren't many useful ones for us, but the one below can be handy for increasing or decreasing the amount of detail in an image:
https://civitai.com/models/58390/detail-tweaker-lora-lora
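For reference, a hedged sketch of attaching a LoRA with `diffusers`; the filename is a placeholder for the download above, and the scale is just an example weight:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "deliberate_v2.safetensors",  # hypothetical local checkpoint
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights("add_detail.safetensors")  # hypothetical LoRA file

image = pipe(
    "a portrait photo",
    cross_attention_kwargs={"scale": 0.8},  # LoRA influence; lower = weaker
).images[0]
```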
## Control nets
Control nets are a tool that allows you to have more control over the AI's image generation process. They make use of various preprocessors and models to create a noise pattern that influences the final output image. With control nets, you can shape the AI's behavior and guide it to produce images with specific characteristics.
One commonly used control net in Stable Diffusion is the OpenPose model, which extracts the pose information from the input image while discarding other details, so the pose structure is captured and everything else is ignored.
For example, running a photo of a person through the OpenPose preprocessor produces a stick-figure pose map, and the final image is then generated to match that pose; see the sketch at the end of this section.
It's important to note that Stable Diffusion offers a range of control net configurations and models for you to try. These configurations can have a significant impact on the final output, allowing you to customize the artistic style, level of detail, and other image attributes.
For a more in-depth guide on control nets in Stable Diffusion, you can refer to the comprehensive documentation available here: https://stable-diffusion-art.com/controlnet/. This guide provides detailed insights into the different preprocessors, models, and techniques used in control nets.
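As a supplement to that guide, a hedged sketch of the OpenPose flow expressed with `diffusers` (the model IDs are the commonly used public ones; the image path, base model, and prompt are placeholders):

```python
import torch
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# Preprocessor: turn the reference photo into a stick-figure pose map.
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose = openpose(Image.open("reference.png"))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder base model
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Generation is now constrained to match the extracted pose.
image = pipe("a dancer on a beach", image=pose).images[0]
image.save("posed.png")
```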
# other
## upscaling
Stable Diffusion GC also provides an upscaling feature. You can upscale the final image using the SD upscale script available at the bottom of the page. This may come in handy for rare cases when upscaling is required.
Keep in mind that once the script is enabled, the width/height settings control the size of each tile the script processes rather than the final output size.
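For the rare cases where you want to upscale outside the UI, one related programmatic option is Stability's x4 upscaler pipeline in `diffusers`; a hedged sketch, with paths as placeholders:

```python
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("final.png").convert("RGB")
upscaled = pipe(prompt="high quality photo", image=low_res).images[0]
upscaled.save("final_4x.png")
```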