(9/08/2022) UPDATE TO THE LATEST VERSION TO USE EVEN LESS VRAM
(9/03/2022) UPDATED TO INCLUDE INPAINTING
(8/27/2022) UPDATED TO INCLUDE GRADIO GUI, IMG2IMG AND PROMPT WEIGHTING
TABLE OF CONTENTS:
- (Gradio GUI Version) Local Install of Stable Diffusion for Windows
- (Non-GUI Version) Local Install of Stable Diffusion for Windows
- Img2img Usage Guide
- Inpainting Usage Guide
- Prompt Weighting
- Prompt Modifiers
- Common Errors/Tips
NOTE: If downloading a new update of the basujindal fork, simply overwrite all files except environment.yaml in your stable-diffusion-main folder!
1. (Gradio GUI Version) Local Install of Stable Diffusion for Windows
- Make an account at https://huggingface.co/ first, then visit https://huggingface.co/CompVis/stable-diffusion-v-1-4-original, scroll down and select "Authorize"
- Download the checkpoint: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt
- Download Stable Diffusion: https://github.com/basujindal/stable-diffusion/archive/refs/heads/main.zip
- Unzip the stable-diffusion-main.zip file to your preferred location, go to the stable-diffusion-main/models/ldm folder and make a new folder inside called stable-diffusion-v1
- Rename the downloaded sd-v1-4.ckpt to model.ckpt and move it into the stable-diffusion-v1 folder
- Go back to the root of the stable-diffusion-main folder, open environment.yaml in Notepad, scroll down to dependencies: and add the line - git so it looks like:
dependencies:
- git
- python=3.8.5
- pip=20.3
- Download Miniconda from here: https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe and install it
- Open Anaconda Prompt (miniconda3) and type cd followed by the path to your stable-diffusion-main folder; if you have it saved in Documents you would type cd Documents/stable-diffusion-main
- Run the command conda env create -f environment.yaml (you only need to do this step for the first time, otherwise skip it)
- Run conda activate ldm, then pip install gradio (first time only, or when an update requires it), and finally python optimizedSD/txt2img_gradio.py
- Enter the local address shown in the command window (it will start with http://127.0.0.1) into your web browser's address bar and there is your GUI to create images!
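Condensed into a single Anaconda Prompt session, the GUI setup above looks like this (the Documents path is just an example; adjust it to wherever you unzipped the folder):

```shell
cd Documents/stable-diffusion-main
conda env create -f environment.yaml    # first run only
conda activate ldm
pip install gradio                      # first run only, or when an update requires it
python optimizedSD/txt2img_gradio.py    # then open the printed http://127.0.0.1 address
```

On later runs you only need the cd, conda activate ldm and python steps.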
2. (Non-GUI Version) Local Install of Stable Diffusion for Windows
- Visit https://huggingface.co/ and create an account
- Visit https://huggingface.co/CompVis/stable-diffusion-v-1-4-original, scroll down and select "Authorize"
- Download the checkpoint: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt
- Download Stable Diffusion: https://github.com/basujindal/stable-diffusion/archive/refs/heads/main.zip
- Unzip stable-diffusion-main.zip file to your preferred location and go to the stable-diffusion-main/models/ldm folder and make a new folder inside called stable-diffusion-v1
- Rename the downloaded sd-v1-4.ckpt to model.ckpt and move the file into the stable-diffusion-v1 folder
- Go back to the start of the stable-diffusion-main folder and open environment.yaml using Notepad
- Scroll down to dependencies: and add the line - git so it looks like:
dependencies:
- git
- python=3.8.5
- pip=20.3
- Download Miniconda from here: https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe
- Run Miniconda3-latest-Windows-x86_64.exe and install it
- Open Anaconda Prompt (miniconda3)
- Type cd followed by the path to your stable-diffusion-main folder; if you have it saved in Documents you would type cd Documents/stable-diffusion-main
- Run the command conda env create -f environment.yaml (you only need to do this step for the first time, otherwise skip it)
- Wait for it to process
- Run conda activate ldm
- Now you can generate images with python scripts/txt2img.py --prompt "insert prompt"!
NOTE: If you are receiving CUDA out of memory errors, use python optimizedSD/optimized_txt2img.py instead of scripts/txt2img.py!
- Your images are saved to stable-diffusion-main/outputs/txt2img-samples/<prompt name> by default; use --outdir directory_name to change this
- 3 images are created by default (5 for optimizedSD). If you would like fewer, use --n_samples x
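For example, a single-image render with both variants (the prompt text is illustrative):

```shell
conda activate ldm
# standard script (higher VRAM use):
python scripts/txt2img.py --prompt "a rusted broken-down car" --n_samples 1
# memory-optimized variant, if you hit CUDA out of memory errors:
python optimizedSD/optimized_txt2img.py --prompt "a rusted broken-down car" --n_samples 1
```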
3. Img2img Usage Guide
- Complete setup for (Gradio GUI Version) Local Install of Stable Diffusion for Windows above
- Open Anaconda Prompt (miniconda3) and type cd followed by the path to your stable-diffusion-main folder; if you have it saved in Documents you would type cd Documents/stable-diffusion-main
- Run conda activate ldm and then python optimizedSD/img2img_gradio.py
- Enter the IP address shown in the command window (it will start with 127.0.0.1) into your address bar in your web browser
- Select an image to upload and then enter your details on the page for generation!
4. Inpainting Usage Guide
- Complete setup for (Gradio GUI Version) Local Install of Stable Diffusion for Windows above
- Open Anaconda Prompt (miniconda3) and type cd followed by the path to your stable-diffusion-main folder; if you have it saved in Documents you would type cd Documents/stable-diffusion-main
- Run conda activate ldm and then python optimizedSD/inpaint_gradio.py
- Enter the IP address shown in the command window (it will start with 127.0.0.1) into your address bar in your web browser
- Select an image to upload to start using with inpainting
- Draw wherever on your image that you wish to have altered to a new variation!
5. Prompt Weighting
While using either Gradio GUI or manual prompting, you may use prompt weighting to shift towards certain modifiers inside of your prompt.
For example, instead of the prompt broken-down car, rusted, red paint you can write broken-down car, rusted:0.25 red paint:0.75, which puts more emphasis on red paint being visible in the image and less on the rust.
Another example is the prompt chicken:0.75 snake:0.25 mixed animal, which pushes the result to look more like a chicken and less like a snake.
6. Prompt Modifiers
txt2img:
--prompt - The main and first one that you use to generate images with
--outdir - Specify the folder you wish to have your images saved to
--skip_grid - Saves the output as individual images instead of a grid
--ddim_steps - Specifies the number of sampling steps; processing time scales roughly linearly with the step count. Higher steps DO NOT necessarily mean a better image (Default: 50)
--plms - Use PLMS sampling
--laion400m - Use the LAION400M model during creation
--n_samples - How many images should be created in one go (Default: 3, 5 for optimizedSD)
--n_iter - How many times to repeat a batch of --n_samples images
--H - Specify the image height, multiples of 64. Warning: Higher values drastically increase compute and VRAM usage (Default: 512)
--W - Specify the image width, multiples of 64. Warning: Higher values drastically increase compute and VRAM usage (Default: 512)
--C - Latent channels used (Default: 4)
--scale - How closely the image should match the prompt. Lower numbers stray further from the prompt; higher numbers follow it more strictly. Recommended to stay at the default, or up to 15-20 (Default: 7.5)
--seed - Seed used during image generation
--precision - Evaluation precision: full or autocast (Default: autocast)
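A combined example using several of these flags with the standard script (all values are illustrative; mix and match to taste):

```shell
python scripts/txt2img.py --prompt "broken-down car, rusted" --H 512 --W 512 --ddim_steps 50 --scale 7.5 --n_samples 1 --seed 42 --skip_grid --outdir outputs/my-cars
```

Fixing --seed lets you regenerate the same image later or compare how other flags change it.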
img2img/inpainting:
prompt - Description of what you want the new image to be based on
strength - How strongly the prompt alters the input image (0.0 is essentially the input image, 10.0 essentially ignores the input image, 5.0 is a middle ground)
7. Common Errors/Tips
- If you're having trouble creating a larger image, try the turbo mode at the bottom of the GUI
- If your outputs come out as solid green images, set precision to full, or add --precision full if you're using the text-based version
- If you are running out of memory despite having a sufficient GPU, use --n_samples 1 to render only one image per batch, and keep the standard 512 width/height
- If you need to recreate your ldm environment or are having problems with it due to a previous installation, run conda remove --name ldm --all
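If you do need to rebuild the environment, the full sequence from a fresh Anaconda Prompt would be (path is an example):

```shell
conda remove --name ldm --all
cd Documents/stable-diffusion-main
conda env create -f environment.yaml
conda activate ldm
```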
Author: Kevi
This is not an official guide by Stable Diffusion/Stability.AI