--K-DIFFUSION RETARD GUIDE (GUI)--

(8/22) New v1.4 AI model released! Tested and fully functional, no adjustments needed!
The definitive Stable Diffusion experience ™
(Windows)
Special thanks to all anons who contributed

What does this add?

Gradio GUI: A retard-proof, fully featured frontend for both txt2img and img2img generation
No more manually typing parameters; just write your prompt and adjust the sliders
K-sampling: Far higher-quality outputs than the default sampler, with less distortion and more accurate results
Easy Img2Img: Drag and drop img2img with built-in cropping tool
CFG: Classifier-free guidance scale, a previously unavailable feature for fine-tuning your output
Lighter on VRAM: 512x512 img2img & txt2img tested working on 6 GB
Randomized seed: No more getting the same results, seed is randomized by default

Guide

Step 1: Download the new v1.4 model via the torrent below
Torrent magnet: https://rentry.org/sdiffusionmagnet

Step 2: Git clone or download the repo from https://github.com/harubaru/waifu-diffusion/ and extract it
(Make sure you have Git installed either way; it will be needed later during setup)
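If you go the Git route, clone it from a prompt (note: this creates a folder named "waifu-diffusion" rather than "waifu-diffusion-main"; the steps below apply to it the same way):

    git clone https://github.com/harubaru/waifu-diffusion.git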

Step 3: In the repo you downloaded, go to waifu-diffusion-main/models/ldm.
Create a folder called "stable-diffusion-v1". Rename your .ckpt file to "model.ckpt" and put it into the folder you just made
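When you're done, the layout should look like this:

    waifu-diffusion-main/
        models/
            ldm/
                stable-diffusion-v1/
                    model.ckpt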

Step 4: Download the Gradio script below and save it as "kdiff.py" (in the save dialog, set the file type to "All Files" so it isn't saved as .txt)
https://pastebin.com/0cFdFC5V
Put kdiff.py into your /scripts folder

Step 5: Download the new environment.yaml below and place it in waifu-diffusion-main, replacing the old one (again, save with file type "All Files")
https://pastebin.com/S9V49mvu

Step 6: Download Miniconda 3 from the official site: https://docs.conda.io/en/latest/miniconda.html

Step 7: Install Miniconda. Install for all users. Uncheck "Register Miniconda as the system Python 3.9" unless you want to

Step 8: Open Anaconda Prompt (miniconda3).
Navigate to the waifu-diffusion-main folder wherever you downloaded it, using "cd" to move between folders.
(Or just type "cd" followed by a space, then drag the folder into the Anaconda Prompt window.)
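For example, if you extracted the repo into your Downloads folder (example path, adjust it to wherever yours actually is):

    cd C:\Users\Anon\Downloads\waifu-diffusion-main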

Step 9: If you have existing folders named "clip" and "taming-transformers" in /src, delete them

Step 10: Run the following command: "conda env create -f environment.yaml" and wait
(Make sure you are in the waifu-diffusion-main folder)

Step 11: Run the following command: "conda activate ldw"
(You will need to type this each time you open Miniconda before running scripts!)

Setup Complete

--USAGE--

  • Open Anaconda Prompt (miniconda3) and navigate to the waifu-diffusion-main folder
  • Type "conda activate ldw"
  • Type "python scripts/kdiff.py" and wait while it loads into ram and vram
  • After finishing, it should give you a LAN ip with a port such as '192.0.1:3288'
  • Open your browser and enter the address
  • You should now be in an interface with a txt2img and img2img tab
  • Have fun
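Put together, a typical session looks like this (the path is just an example):

    cd C:\Users\Anon\Downloads\waifu-diffusion-main
    conda activate ldw
    python scripts/kdiff.py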

--NOTES AND TIPS--

  • Build great prompts using the prompt builder
  • Check out the wiki https://wiki.installgentoo.com/wiki/Stable_Diffusion
  • Sampling iterations = how many images are made in a batch
  • Samples per iteration = how many images are rendered simultaneously. Don't set it higher than 1 or 2 unless you have very high VRAM
  • (img2img) Adjust Denoising Strength accordingly. Higher = more guided toward prompt, Lower = more guided toward image
    Anywhere between 0.3 and 0.9 is the sweet spot for most prompts
  • If your output is a jumbled rainbow mess your image resolution is set TOO LOW
  • Feeding outputs back in using the same prompt with a weak strength multiple times can produce great results
  • The more keywords, the better. Look up guides for prompt tagging
  • It's recommended to have your outputs be at least 512 pixels in one dimension, or a 384x384 square at the smallest
    Anything smaller will have heavy artifacting
  • Try Low strength (0.3-0.4) + High CFG in img2img for interesting outputs
  • The seed for each generated result is in the output filename if you want to revisit it
  • You can use Japanese Unicode characters in prompts
  • This guide is designed for NVIDIA GPUs only, as Stable Diffusion requires CUDA.
    AMD users should try https://rentry.org/kretard
  • A good tool for upscaling your outputs is Real-ESRGAN: https://github.com/xinntao/Real-ESRGAN
  • You can prune a v1.3 weight model using "python scripts/prune.py" in waifu-diffusion-main
    Pruning shrinks the file size to roughly 2 GB instead of 7 GB; output remains largely equivalent (see the sketch below this list)
    Comparison: https://i.postimg.cc/ZRKz4tJv/textprune.png
  • (prune.py does not work on the new model, but that doesn't matter, as the v1.4 checkpoint is already lighter than v1.3)
  • If your output is solid green, the half-precision optimization may not be working for you:
  • GREEN SCREEN FIX (sketched below the list):
    1. Change the value of "default" to "full" on lines 169 and 343 of kdiff.py
    2. Delete ".half()" on line 89 of kdiff.py
    (Note: this will raise VRAM usage drastically)
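For reference, the green screen edits look roughly like this. This is an illustrative sketch, not the literal contents of kdiff.py (names in your copy may differ, so go by the line numbers above):

    # Line 89: drop the .half() cast so the model weights stay in full (fp32) precision.
    # was: model = load_model_from_config(config, ckpt).half()
    model = load_model_from_config(config, ckpt)

    # Lines 169 and 343: change the precision value from "default" to "full"
    # (hypothetical form; find the string "default" on those lines and replace it):
    precision = "full"  # was: precision = "default"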
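On pruning: the idea is to strip everything out of the checkpoint except the model weights. A minimal sketch of what a pruner does, assuming the standard Stable Diffusion .ckpt layout (this is just the concept, not the bundled prune.py):

    import torch

    # Load the full checkpoint onto the CPU.
    ckpt = torch.load("sd-v1-3-full-ema.ckpt", map_location="cpu")

    # Keep only the weights; discard optimizer states and other training-only data.
    pruned = {"state_dict": ckpt["state_dict"]}

    torch.save(pruned, "model.ckpt")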

--OLD MODEL--
The original v1.3 leaked model from July can be downloaded here:
https://drinkordiecdn.lol/sd-v1-3-full-ema.ckpt
Backup Download: https://download1980.mediafire.com/3nu6nlhy92ag/wnlyj8vikn2kpzn/sd-v1-3-full-ema.ckpt
Torrent Magnet: https://rentry.co/6gocs

--CHANGELOG--
8/22: Renamed "gradio.py" to "kdiff.py"; the previous name conflicted with the Gradio package, causing an AttributeError.
If you are having issues, please rename your copy

  • Added a fix for the green screen of death
  • Added official v1.4 model links

8/23: Installation process now vastly simplified using the new environment.yaml; the original guide is available at https://rentry.org/kretardold if problems arise