--K-DIFFUSION RETARDEDER GUIDE (GUI)--

NOW WITH GFPGAN!

The definitive Stable Diffusion experience ™
(Windows)

What does this add?

Gradio GUI: A retard-proof, fully featured frontend for both txt2img and img2img generation
No more manually typing parameters, now all you have to do is write your prompt and adjust sliders
GFPGAN Face Correction (NEW): Automatically correct distorted faces with a built-in GFPGAN option, fixes them in less than half a second
K-sampling: Far greater quality outputs than the default sampler, with less distortion and more accurate results
Easy Img2Img: Drag and drop img2img with built-in cropping tool
CFG: Classifier free guidance scale, a previously unavailable feature for fine-tuning your output
Lighter on VRAM: 512x512 img2img & txt2img tested working on 6 GB
Randomized seed: No more getting the same results, seed is randomized by default
and more!
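For the curious, the CFG slider corresponds to classifier-free guidance: the sampler runs an unconditional and a prompt-conditioned noise prediction and blends them. A minimal sketch of the mixing step with plain lists standing in for tensors (names are illustrative, not the actual webui code):

```python
# Minimal sketch of classifier-free guidance (CFG) mixing.
# eps_uncond / eps_cond stand in for the unconditional and
# prompt-conditioned noise predictions; the real sampler uses tensors.
def cfg_mix(eps_uncond, eps_cond, scale):
    """scale = 1.0 reproduces the conditional prediction;
    higher scales push the output harder toward the prompt."""
    return [u + scale * (c - u) for u, c in zip(eps_uncond, eps_cond)]

# At scale 1.0 the result equals the conditional prediction:
print(cfg_mix([0.0, 0.0], [1.0, 2.0], 1.0))  # → [1.0, 2.0]
```

Higher scales follow the prompt more literally at the cost of variety, which is why moderate values usually look best.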

Guide

Step 1: Download the NEW 1.4 model from huggingface or HERE
Torrent magnet: https://rentry.org/sdiffusionmagnet

Step 2: Git clone the repo from https://github.com/hlky/stable-diffusion/ (or download it as a zip and extract it).
If you don't have git, install gitforwindows.
After installing, use Git Bash to run: git clone https://github.com/hlky/stable-diffusion/
(You can right-click in any folder to open Git Bash in that folder)

Step 3: Move and rename downloaded model to stable-diffusion/models/ldm/stable-diffusion-v1/model.ckpt

Step 4: Download Miniconda HERE. Choose Miniconda 3

Step 5: Install Miniconda. Install for all users. Uncheck "Register Miniconda as the system Python 3.9" unless you want to

Step 6: Double click the Setup Waifu Diffusion shortcut in the stable-diffusion folder to open the conda prompt
Go to the stable-diffusion/ folder using "cd" to jump folders.
(Or just type "cd" followed by a space and then drag the stable-diffusion/ folder into the Anaconda prompt.)

Step 7: Run the following command: "setup.cmd" and wait

ALTERNATIVE for step 6 onwards:
Step 6: Open Anaconda Prompt (miniconda3).
Go to the stable-diffusion/ folder using "cd" to jump folders.
(Or just type "cd" followed by a space and then drag the stable-diffusion/ folder into the Anaconda prompt.)

Step 7: Run the following command: "conda env create -f environment.yaml" and wait
(Make sure you are in the stable-diffusion folder)

Step 8: Run the following command: "conda activate ldm"
(You will need to type this each time you open Miniconda before running scripts!)

OPTIONAL: If you want GFPGAN support
Download the GFPGAN pre-trained model and place it in src/gfpgan/experiments/pretrained_models/

Setup Complete

--USAGE--
Double click the Launch Waifu Diffusion shortcut

ALTERNATE USAGE:

  • Open Miniconda and navigate to stable-diffusion
  • Type "conda activate ldm"
  • Type "python scripts/webui.py" and wait while it loads into ram and vram
  • After it finishes loading, it should give you a local address with a port, such as '127.0.0.1:7860'
  • Open your browser and enter the address
  • You should now be in an interface with a txt2img and img2img tab
  • Have fun

NEWEST SCRIPT UPDATES AVAILABLE HERE: https://github.com/hlky/stable-diffusion-webui
^(warning: bleeding edge, may have bugs; new features may be pushed here before being synced with the main repo. Use the repo script if concerned about stability)^

Special thanks to all anons who contributed

--LINKS/NOTES/TIPS--

  • Build great aesthetic prompts using the prompt builder
  • Check out the wiki https://wiki.installgentoo.com/wiki/Stable_Diffusion
  • A fantastic simple tool for upscaling your outputs is cupscale: https://github.com/n00mkrad/cupscale
  • original webgui.py repo credit
  • If you are getting "prefix already exists: ldm", run "conda env remove -n ldm", then run "conda env create -f environment.yaml" again
  • The seed for each generated result is in the output filename if you want to revisit it
  • (Fixed) If your generations are unusually slow, disable hardware acceleration in the browser that is running webgui
  • If your output is a jumbled rainbow mess your image resolution is set TOO LOW
  • (loopback implemented) Feeding outputs back in using the same prompt with a weak strength multiple times can produce great results
  • Using the same keywords as a generated image in img2img produces interesting variants
  • The more keywords, the better. Look up guides for prompt tagging
  • It's recommended to have your outputs be at least 512 pixels in one dimension, or a 384x384 square at the smallest
    Anything smaller will have heavy artifacting
  • 512x512 will always yield the most accurate results as the model was trained at that resolution
  • Try Low strength (0.3-0.4) + High CFG in img2img for interesting outputs
  • You can use Japanese Unicode characters in prompts
  • This guide is designed for NVIDIA GPUs only, as Stable Diffusion requires CUDA.
    AMD users should try https://rentry.org/tqizb
  • Line 202 of webgui.py will result in an error on Linux.
    Either use the default font (which will throw an error if your prompt contains Japanese):
    fnt = ImageFont.load_default()
    or point directly to a font that covers the characters you use:
    fnt = ImageFont.truetype("/usr/share/fonts/noto-cjk/NotoSansCJK-Medium.ttc", fontsize)
  • You can prune a v1.3 weight model using "python scripts/prune.py" in waifu-diffusion-main
    Pruning shrinks the file from about 7 GB to 2 GB; output remains largely equivalent
    Comparison- https://i.postimg.cc/ZRKz4tJv/textprune.png
  • (prune.py does not work on the new model, but that doesn't matter, as v1.4 is already lighter than v1.3)
  • If your output is solid green, the half precision optimization may not be working for you:
  • GREEN SCREEN FIX:
  • Run webui.py with the following parameters:
    "python scripts/webui.py --precision full --no-half"
    (Note: this will raise VRAM usage drastically; you may have to reduce resolution)
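Since the seed is embedded in the output filename (see the tip above), you can recover it programmatically. A small sketch, assuming the seed is the last run of digits before the extension; the exact naming pattern varies by webui version:

```python
import re

def seed_from_filename(name):
    """Extract the seed from an output filename.

    Assumes the seed is the last run of digits before the file
    extension, e.g. "00003-a_cute_cat-1234567890.png" -> 1234567890.
    The actual naming pattern depends on the webui version.
    """
    digits = re.findall(r"\d+", name.rsplit(".", 1)[0])
    if not digits:
        raise ValueError(f"no seed found in {name!r}")
    return int(digits[-1])

print(seed_from_filename("00003-a_cute_cat-1234567890.png"))  # → 1234567890
```

Paste the recovered seed into the seed field (instead of -1) to regenerate or make variants of that image.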

--OLD MODEL--
The original v1.3 leaked model from July can be downloaded here:
https://drinkordiecdn.lol/sd-v1-3-full-ema.ckpt
Backup Download: https://download1980.mediafire.com/3nu6nlhy92ag/wnlyj8vikn2kpzn/sd-v1-3-full-ema.ckpt
Torrent Magnet: https://rentry.co/6gocs

--CHANGELOG--
8/22: renamed "gradio.py" to "kdiff.py", because the previous name conflicted with the Gradio package, causing an AttributeError.
If you are having issues, please rename it

  • added fix to green screen of death
  • added official v1.4 model links

8/23: Installation process now simplified vastly using new environment.yaml, original guide available at https://rentry.org/kretardold if problems arise (unlikely)

  • Upgraded with GFPGAN support!
  • Previous non-GFPGAN guide available here: https://rentry.org/kretardnogf
  • renamed "kdiff.py" to "webgui.py" (new script)
  • changed "ldw" to "ldx" to prevent accidental overwriting of environments

8/24: New script added. Features:
-New image resizing options built-in
-Hide Gradio progress bar to save on GPU usage
-Prompt verification (to see if it's too long)
-Prompt matrix from the txt2img portion added to img2img
-General refactoring
(8/24) Script readme updated

8/25: Instructions updated

Pub: 25 Aug 2022 11:16 UTC
Edit: 25 Aug 2022 11:28 UTC