
How to train a new person in Stable Diffusion using Dreambooth

Assumes you have a RunPod (https://www.runpod.io/) account with credits

  • Choose Secure Cloud and a 1x RTX A5000 machine (something with at least 24GB of VRAM)
  • Select the "RunPod PyTorch" template
  • Container Disk and Volume Disk = 40GB
  • Make sure "Start Jupyter Notebook" is ticked
  • Click Continue
  • Click on My Pods
  • Click on connect
  • Click on Connect to Jupyter Lab
  • Click on Python 3 (notebook)
  • Paste `!git clone https://github.com/JoePenna/Dreambooth-Stable-Diffusion.git` into the first cell and press the play icon
  • A new folder, Dreambooth-Stable-Diffusion, should appear in the file browser. You may need to press the refresh icon. Double-click the folder to enter it.
  • Now double-click the file dreambooth_runpod_joepenna.ipynb. This should open in a new tab (you can close or delete the other notebook if you wish)
  • Under Build Environment you can skip the first cell (we have already done the git clone) and run the second one to install the Python dependencies. You can tell a cell has finished when the * next to it turns into a number.
  • Before we run the next cell we need to create a Hugging Face token. Visit https://huggingface.co and create an account (it's free) if you don't have one already.
  • Once logged in, go to https://huggingface.co/settings/tokens
  • Create a new token with write access
  • You will also need to accept the terms and conditions for the Stable Diffusion version you are about to download, so visit https://huggingface.co/CompVis/stable-diffusion-v-1-4-original
  • Back in RunPod you can now run the token cell, paste the new token into the box, and press play (a sketch of the login this cell performs is included after this list)
  • You should now be able to run the next cell, which downloads the 1.4 Stable Diffusion model (also sketched after this list)
  • Next we need some Regularization Images. I used the pre-generated ones, so ran that cell.
  • Now we upload our training images. I created a folder called training_images and dragged my images into that folder rather than providing URLs. I pre-sized them using a locally running copy of Stable Diffusion (the Train → Preprocess images tab), but a short script will also do the job (see the resizing sketch after this list). They need to be .png files at 512x512.
  • You are now ready to train. Fill in project with a unique name for your project; the important fields are class_word and token. For the purpose of training myself I used person and duncan_robertson.
  • The final setting is max_training_steps
  • For max_training_steps I used the number of training images I had uploaded × 100 (a small calculation sketch follows the list)
  • You can now run this cell. It takes about an hour
  • Once it completes, run the next cell, which moves the last checkpoint file into the trained_models folder (roughly sketched after this list)
  • You can now download this folder, as it contains the newly trained model you can use in your local Stable Diffusion setup
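
The token cell mentioned above is essentially a Hugging Face Hub login. A minimal sketch of doing the same login by hand, assuming the huggingface_hub package has been installed by the Build Environment step:

```python
# Log this notebook session in to Hugging Face so the model download is authorised.
from huggingface_hub import notebook_login

# Pops up the same token box you see in the DreamBooth notebook;
# paste the token you created at https://huggingface.co/settings/tokens
notebook_login()
```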
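The model-download cell then amounts to pulling the v1.4 checkpoint from the repo whose licence you accepted. A hedged sketch, assuming the weights file in CompVis/stable-diffusion-v-1-4-original is named sd-v1-4.ckpt:

```python
# Fetch the Stable Diffusion 1.4 checkpoint from the Hugging Face Hub.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="CompVis/stable-diffusion-v-1-4-original",  # the repo whose terms you accepted
    filename="sd-v1-4.ckpt",                            # assumed name of the v1.4 weights file
)
print("Checkpoint downloaded to", ckpt_path)
```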
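If you prefer not to use a local SD install to prepare the training images, a short Pillow script can centre-crop them to 512x512 PNGs instead. The raw_images folder name below is just an illustration; point it at wherever your originals live:

```python
# Centre-crop and resize every image in raw_images/ to a 512x512 PNG in training_images/.
from pathlib import Path
from PIL import Image, ImageOps

src = Path("raw_images")        # illustrative name: folder with your original photos
dst = Path("training_images")   # folder the notebook expects the training images in
dst.mkdir(exist_ok=True)

for path in src.iterdir():
    if path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".webp"}:
        continue
    img = Image.open(path).convert("RGB")
    img = ImageOps.fit(img, (512, 512), method=Image.LANCZOS)  # crop to square, then resize
    img.save(dst / f"{path.stem}.png", format="PNG")
    print("wrote", dst / f"{path.stem}.png")
```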
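The max_training_steps rule of thumb (number of images × 100) can be calculated straight from the upload folder:

```python
# Count the uploaded training images and apply the ~100-steps-per-image rule of thumb.
from pathlib import Path

num_images = len(list(Path("training_images").glob("*.png")))
max_training_steps = num_images * 100
print(f"{num_images} images -> max_training_steps = {max_training_steps}")
```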
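Finally, the cell that moves the last checkpoint can be approximated as below. The logs/<run>/checkpoints/last.ckpt layout is an assumption about where the training run writes its output, so check your own logs folder first:

```python
# Copy the newest run's final checkpoint into trained_models/ so it can be downloaded.
import glob
import os
import shutil

candidates = glob.glob("logs/*/checkpoints/last.ckpt")  # assumed checkpoint layout
latest = max(candidates, key=os.path.getmtime)          # the most recently written run
os.makedirs("trained_models", exist_ok=True)
shutil.copy(latest, os.path.join("trained_models", "duncan_robertson.ckpt"))  # name it however you like
```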