SD RESOURCE GOLDMINE

Preamble

This is a curated collection of relevant links and information. Outdated information is put into one of the collections in Archives for archival or sorting purposes.

This collection is currently hosted on the SD Goldmine rentry, the SD Updates rentry (3), and Github

All rentry links here end in '.org' and can be changed to '.co'. Also, use incognito/private browsing when opening Google links, or else you lose your anonymity / someone may dox you.

Contact

If you have information/files not on this list, have questions, or want to help, please contact me with details

Socials:
Trip: questianon !!YbTGdICxQOw
Discord: malt#6065
Reddit: u/questianon
Github: https://github.com/questianon
Twitter: https://twitter.com/questianon

How to use this resource

The goldmine is a general repository of links that might be helpful. If you are a newcomer to Stable Diffusion, it's highly recommended to start from the beginning.

If something is missing from here that was here before, try checking https://rentry.org/soutdated1.

Emoji

Items on this list with a πŸ₯’ next to them represent my top pick for the category. This rating is entirely opinionated and represents what I have personally used and recommend, not what is necessarily "the best".

Warnings

  1. Ckpts/hypernetworks/embeddings and other things downloaded from here are not inherently safe as of right now. They can be pickled/contain malicious code. Use your common sense and protect yourself as you would with any random download link you see on the internet.
  2. Monitor your GPU temps and increase cooling and/or undervolt them if you need to. There have been claims of GPU issues due to high temps.

Updates

Don't forget to git pull to get a lot of new optimizations + updates. If SD breaks, go backward in commits until it starts working again

Instructions:

  • If on Windows:
    1. navigate to the webui directory through command prompt or git bash
      a. Git bash: right click > git bash here
      b. Command prompt: click the address bar in File Explorer (the space between the folder path and the down arrow), type "cmd", and press Enter
      c. If you don't know how to do this, open command prompt and type "cd [path to stable-diffusion-webui]" (you can get the path by clicking the folder in the address bar, or by holding Shift and right-clicking the stable-diffusion-webui folder)
    2. git pull
    3. pip install -r requirements.txt
  • If on Linux:
    1. go to the webui directory
    2. source ./venv/bin/activate
      a. if this doesn't work, run python -m venv venv beforehand
    3. git pull
    4. pip install -r requirements.txt
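The Linux steps above, as a single sketch (the install path is an assumption; adjust it to wherever your webui lives):

```
cd ~/stable-diffusion-webui     # assumed install location
source ./venv/bin/activate      # run `python -m venv venv` first if this fails
git pull
pip install -r requirements.txt
```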

Localizations

French:


Contents

Tutorial

Getting Started

AMD

AMD isn't as easy to set up as NVIDIA.

Linux

Honestly I don't know what goes here. I'll add a guide if I remember

CPU

CPU is less documented.

Apple Silicon

Troubleshooting

Why are my outputs black? (Any card)

Add " --no-half-vae " (remove the quotations) to your commandline args in webui-user.bat

Why are my outputs black? (16xx card)

Add " --precision full --no-half " (remove the quotations) to your commandline args in webui-user.bat
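For reference, these flags go on the COMMANDLINE_ARGS line of webui-user.bat. A minimal sketch of the stock file with the black-output fix applied (keep any flags you already have):

```
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--no-half-vae

call webui.bat
```

For a 16xx card, use `set COMMANDLINE_ARGS=--precision full --no-half` instead.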

Repositories

These are repositories containing general AI knowledge

English:

Korean:

Prompting

Documents

These are documents containing general prompting knowledge

English:

Chinese:

Japanese:

Korean:

Prompt Database

Tips

Negatives

Tags

Tag Rankings

Tag Comparisons

Comparisons:

Artists

Images:

Sites:

Other Comparisons

Extensions

Extensions are searchable through AUTOMATIC1111's extension browser

Wildcards

Collections

Text Files

Plugins for External Apps

I didn't check the safety of these plugins, but you can check the open-source ones yourself

Photoshop

Krita

GIMP

Blender




everything past here is UNSORTED

Prompt word/phrase collection: https://huggingface.co/spaces/Gustavosta/MagicPrompt-Stable-Diffusion/raw/main/ideas.txt
Japanese prompt generator: https://magic-generator.herokuapp.com/
Build your prompt (chinese): https://tags.novelai.dev/
NAI Prompts: https://seesaawiki.jp/nai_ch/d/%c8%c7%b8%a2%a5%ad%a5%e3%a5%e9%ba%c6%b8%bd/%a5%a2%a5%cb%a5%e1%b7%cf
Prompt similarity tester: https://gitlab.com/azamshato/simula

Multilingual study: https://jalonso.notion.site/Stable-Diffusion-Language-Comprehension-5209abc77a4f4f999ec6c9b4a48a9ca2

Aesthetic value (imgs used to train SD): https://laion-aesthetic.datasette.io/laion-aesthetic-6pls
Clip retrieval (text to CLIP to search): https://rom1504.github.io/clip-retrieval/

Aesthetic scorer python script: https://github.com/grexzen/SD-Chad
Another scorer: https://github.com/christophschuhmann/improved-aesthetic-predictor
Supposedly another one?: https://developer.huawei.com/consumer/en/hiai/engine/aesthetic-score
Another Aesthetic Scorer: https://github.com/tsngo/stable-diffusion-webui-aesthetic-image-scorer

Prompt editing parts of image but without using img2img/inpaint/prompt editing guide by anon: https://files.catbox.moe/fglywg.JPG

Tip Dump: https://rentry.org/robs-novel-ai-tips
Tips: https://github.com/TravelingRobot/NAI_Community_Research/wiki/NAI-Diffusion:-Various-Tips-&-Tricks
Info dump of tips: https://rentry.org/Learnings
Tip for more photorealism: https://www.reddit.com/r/StableDiffusion/comments/yhn6xx/comment/iuf1uxl/

  • TLDR: add noise to your img before img2img

NAI prompt tips: https://docs.novelai.net/image/promptmixing.html
NAI tips 2: https://docs.novelai.net/image/uifunctionalities.html

Masterpiece vs no masterpiece: https://desuarchive.org/g/thread/89714899#89715160

DPM-Solver Github: https://github.com/LuChengTHU/dpm-solver
Deep Danbooru: https://github.com/KichangKim/DeepDanbooru
Demo: https://huggingface.co/spaces/hysts/DeepDanbooru

Embedding tester: https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer

Collection of Aesthetic Gradients: https://github.com/vicgalle/stable-diffusion-aesthetic-gradients/tree/main/aesthetic_embeddings

Seed hunting:

  • By nai speedrun asuka imgur anon:
    >made something that might help the highres seed/prompt hunters out there. this mimics the "0x0" firstpass calculation and suggests lowres dimensions based on target highres size. it also shows data about firstpass cropping as well. it's a single file so you can download and use offline. picrel.
    >https://preyx.github.io/sd-scale-calc/
    >view code and download from
    >https://files.catbox.moe/8ml5et.html
    >for example you can run "firstpass" lowres batches for seed/prompt hunting, then use them in firstpass size to preserve composition when making highres.
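The suggestion above can be approximated in a few lines. This is a hypothetical re-implementation of the idea (the linked page is the authoritative version): scale the target resolution down to roughly 512x512 worth of pixels while keeping the aspect ratio, rounded to the multiples of 64 that SD expects.

```python
import math

def firstpass_size(target_w, target_h, base=512, step=64):
    """Suggest lowres (firstpass) dimensions for a highres target.

    Hypothetical sketch of the sd-scale-calc idea, not its exact code:
    shrink the target area to about base*base pixels, keep the aspect
    ratio, and round each side to a multiple of `step`.
    """
    scale = math.sqrt((base * base) / (target_w * target_h))
    w = max(step, round(target_w * scale / step) * step)
    h = max(step, round(target_h * scale / step) * step)
    return w, h

# e.g. a 1024x1536 highres target suggests a 448x640 firstpass
```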

Script for tagging (like in NAI) in AUTOMATIC's webui: https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
Danbooru Tag Exporter: https://sleazyfork.org/en/scripts/452976-danbooru-tags-select-to-export
Another: https://sleazyfork.org/en/scripts/453380-danbooru-tags-select-to-export-edited
Tags (latest vers): https://sleazyfork.org/en/scripts/453304-get-booru-tags-edited
Basic gelbooru scraper: https://pastebin.com/0yB9s338
Scrape danbooru images and tags like fetch.py for e621 for tagging datasets: https://github.com/JetBoom/boorutagparser
UMI AI: https://www.patreon.com/klokinator

Python script of generating random NSFW prompts: https://rentry.org/nsfw-random-prompt-gen
Prompt randomizer: https://github.com/adieyal/sd-dynamic-prompting
Prompt generator: https://github.com/h-a-te/prompt_generator

  • apparently UMI uses these?

StylePile: https://github.com/some9000/StylePile
script that pulls prompt from Krea.ai and Lexica.art based on search terms: https://github.com/Vetchems/sd-lexikrea
randomize generation params for txt2img, works with other extensions: https://github.com/stysmmaker/stable-diffusion-webui-randomize

Collection + Info: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Extensions
Deforum (video animation): https://github.com/deforum-art/deforum-for-automatic1111-webui

Auto-SD-Krita: https://github.com/Interpause/auto-sd-paint-ext

Wildcard script + collection of wildcards: https://app.radicle.xyz/seeds/pine.radicle.garden/rad:git:hnrkcfpnw9hd5jb45b6qsqbr97eqcffjm7sby
Symmetric image script (latent mirroring): https://github.com/dfaker/SD-latent-mirroring

macOS Finder right-click menu extension: https://github.com/anastasiuspernat/UnderPillow
Search danbooru for tags directly in AUTOMATIC1111's webui extension: https://github.com/stysmmaker/stable-diffusion-webui-booru-prompt

  • Supports post IDs and all the normal Danbooru search syntax

Clip interrogator: https://colab.research.google.com/github/pharmapsychotic/clip-interrogator/blob/main/clip_interrogator.ipynb
2 (apparently better than AUTO webui's interrogate): https://huggingface.co/spaces/pharma/CLIP-Interrogator, https://github.com/pharmapsychotic/clip-interrogator

  • AUTOMATIC1111 webui modification that "compensates for the natural heavy-headedness of sd by adding a line from 0 to sqrt(2) over the 0-74 token range (anon)" (evens out the token weights with a linear model, helps with the weight reset at 75 tokens (?))
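Taking the anon's description at face value, the ramp itself would look something like this. This is only a guess at the shape; how the modification actually applies these values to the token weights isn't specified in the quote, and the endpoints are assumptions.

```python
import math

def token_weight_ramp(n_tokens=75):
    # A straight line from 0 up to sqrt(2) across token positions 0..74,
    # per the description above. Purely illustrative.
    return [math.sqrt(2) * i / (n_tokens - 1) for i in range(n_tokens)]
```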

VAEs

Tutorial + how to use on ALL models (applies for the NAI vae too): https://www.reddit.com/r/StableDiffusion/comments/yaknek/you_can_use_the_new_vae_on_old_models_as_well_for/

Booru tag scraping:

Creating fake animes:

Models, Embeddings, and Hypernetworks

Downloads listed as "sus" or "might be pickled" generally mean there were 0 replies and not enough information (like training info), or that the replies indicated they were suspicious. I don't think any of the embeds/hypernets have had their code checked, so they could all be malicious, but as far as I know no one has gotten pickled yet.

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner or https://github.com/lopho/pickle_inspector

Models*

Model pruner: https://github.com/harubaru/waifu-diffusion/blob/bc626e8/scripts/prune.py

πŸ₯’ CivitAI, an art-focused model repo alternative to HF: https://civitai.com/
πŸ₯’ HuggingFace, the standard model repo: https://huggingface.co/models?pipeline_tag=text-to-image&sort=downloads
Collection of potentially dangerous models: https://bt4g.org/search/.ckpt/1

EveryDream Trainer

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner or https://github.com/lopho/pickle_inspector

Download + info + prompt templates: https://github.com/victorchall/EveryDream-trainer

Dreambooth Models:

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner or https://github.com/lopho/pickle_inspector

Links:

Embeddings

Use a download manager to download these. It saves a lot of time + good download managers will tell you if you have already downloaded one

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner or https://github.com/lopho/pickle_inspector

You can check .pts here for their training info using a text editor

Hypernetworks:

If a hypernetwork is <80mb, I mislabeled it and it's an embedding

Use a download manager to download these. It saves a lot of time + good download managers will tell you if you have already downloaded one

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner or https://github.com/lopho/pickle_inspector

https://arca.live/b/aiart/60927159?p=1
https://arca.live/b/hypernetworks/60927228?category=%EA%B3%B5%EC%9C%A0&p=2
Senri Gan: https://files.catbox.moe/8sqmeh.rar
Big dumpy of a lot of hypernets (has slime too): https://mega.nz/folder/kPdBkT5a#5iOXPnrSfVNU7F2puaOx0w
Collection of asanuggy + maybe some more: https://mega.nz/folder/Uf1jFTiT#TZe4d41knlvkO1yg4MYL2A
Collection: https://mega.nz/folder/fVhXRLCK#4vRO9xVuME0FGg3N56joMA

Chinese telegram (uploaded by telegram anon): magnet:?xt=urn:btih:8cea1f404acfa11b5996d1f1a4af9e3ef2946be0&dn=ChatExport%5F2022-10-30&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce

I've made a full export of the Chinese Telegram channel.

It's 37 GB (~160 hypernetworks and a bunch of full models).
If you don't want all that, I would recommend downloading everything but the 'files' folder first (like 26 MB), then opening the html file to decide what you want.

Mogubro + constant updates (dead): https://mega.nz/folder/hlZAwara#wgLPMSb4lbo7TKyCI1TGvQ

Training

Train stable diffusion model with Diffusers, Hivemind and Pytorch Lightning: https://github.com/Mikubill/naifu-diffusion

Extension: https://github.com/d8ahazard/sd_dreambooth_extension

Image tagger helper: https://github.com/nub2927/image_tagger/

Euler vs. Euler A: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/2017#discussioncomment-4021588

anything.ckpt comparisons
Old final-pruned: https://files.catbox.moe/i2zu0b.png (embed)
v3-pruned-fp16: https://files.catbox.moe/k1tvgy.png (embed)
v3-pruned-fp32: https://files.catbox.moe/cfmpu3.png (embed)
v3 full or whatever: https://files.catbox.moe/t9jn7y.png (embed)

Alternatives

Browser

I want to run this, but my computer is too bad. Is there any other way?
Check out one of these (I did not use most of these, so I can't attest to their safety):

FAQ

Check out https://rentry.org/sdupdates and https://rentry.org/sdupdates2 for other questions
https://rentry.org/sdg_FAQ

What's all the new stuff?

Check here to see if your question is answered:

What's the "Hello Asuka" test?

It's a flawed test to see if you're able to get a 1:1 recreation with NAI and have everything set up properly. Coined after asuka anon and his efforts to recreate 1:1 NAI before all the updates. Deviations arise with certain systems.

Refer to

What is pickling/getting pickled?

ckpt files and Python files can execute code when loaded. Getting pickled is when one of these files executes malicious code that infects your computer with malware. It's a memey way of saying you got hacked.

How do I directly check AUTOMATIC1111's webui updates?

For a complete list of updates, go here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commits/master

What do I do if a new update bricks/breaks my AUTOMATIC1111 webui installation?

  1. Go to https://github.com/AUTOMATIC1111/stable-diffusion-webui/commits/master
  2. Find the commit where the change that broke your install happened
  3. Copy the short hash (the blue number on the right) of the commit just before that change
  4. Open a command line/git bash at the root of your install (where you usually git pull)
  5. Run 'git checkout <hash without these angled brackets>'

To reset your install to the latest version, use 'git checkout master'.
'git checkout .' will discard any local changes you have made.
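A worked example of the rollback (the commit hash here is made up; use the real one from the commit list):

```
cd stable-diffusion-webui        # the root of your install
git log --oneline -20            # or find the last good commit on GitHub
git checkout abc1234             # pin your install to the commit before the breakage
# ...later, to get back on the latest version:
git checkout master
git pull
```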

Another Guide: https://rentry.org/git_retard

What is...? (by anon)

What is a VAE?

Variational autoencoder, basically a "compressor" that can turn images into a smaller representation and then "decompress" them back to their original size. This is needed so you don't need tons of VRAM and processing power since the "diffusion" part is done in the smaller representation (I think). The newer SD 1.5 VAEs have been trained more and they can recreate some smaller details better.
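The arithmetic behind that: SD's VAE maps each 8x8 patch of RGB pixels to a single 4-channel latent value, so diffusion runs on far fewer numbers than the full image. A back-of-the-envelope sketch, assuming the standard SD 1.x latent layout:

```python
def latent_shape(width, height, channels=4, factor=8):
    # A width x height RGB image becomes a (width/8) x (height/8) x 4 latent.
    return (width // factor, height // factor, channels)

def compression_ratio(width, height, channels=4, factor=8):
    # How many raw RGB values are represented per latent value.
    pixels = width * height * 3
    latents = (width // factor) * (height // factor) * channels
    return pixels / latents

# e.g. a 512x512 image diffuses as a 64x64x4 latent, a 48x reduction
```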

What is pruning?

Removing unnecessary data (anything that isn't needed for image generation) from the model so that it takes less disk space and fits more easily into your VRAM
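In code terms, pruning amounts to keeping the state_dict and dropping training-only entries. A sketch only: the key names here follow common SD-style checkpoints and may differ for yours, and real pruners (like the waifu-diffusion prune.py linked under Models) may also convert weights to fp16 to halve the size again.

```python
# Training-only entries commonly found in SD-style checkpoints (assumed names).
TRAINING_KEYS = {"optimizer_states", "lr_schedulers", "callbacks",
                 "epoch", "global_step"}

def prune_checkpoint(ckpt):
    """Keep only what inference needs: drop optimizer momentum and other
    training bookkeeping, which can be a large share of the file size."""
    return {k: v for k, v in ckpt.items() if k not in TRAINING_KEYS}
```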

What is a pickle, not referring to the python file format? What is the meme surrounding this?

When the NAI model leaked people were scared that it might contain malicious code that could be executed when the model is loaded. People started making pickle memes because of the file format.

Why is some stuff tagged as being 'dangerous', and why does the StableDiffusion WebUI have a 'safe-unpickle' flag? -- I'm stuck on pytorch 1.11 so I have to disable this

Safe unpickling checks the pickle's code library imports against an approved list. If the pickle tries to import something that isn't on the list, it won't be loaded. A blocked file isn't necessarily dangerous, but you should be cautious: some things might still slip through and execute arbitrary code on your computer.

Is the rentry stuff all written by one person or many?

There are many people maintaining different rentries.

What's the difference between embeds, hypernetworks, and dreambooths? What should I train?
Anon:

I've tested a lot of the model modifications and here are my thoughts on them:
embeds: these are tiny files which find the best representation of whatever you're training them on in the base model. By far the most flexible option, and they will have very good results if the goal is to group or emphasize things the model already understands
hypernetworks: these are like instructions that slightly modify the result of the base model after each sampling step. They are quite powerful and work decently for everything I've tried (subjects, styles, compositions). The cons are that they can't be easily combined like embeds, and they are harder to train because good parameters seem to vary wildly, so a lot of experimentation is needed each time
dreambooth: modifies part of the model itself and is the only method which actually teaches it something new. Fast and accurate results, but the weights for generating adjacent stuff will get trashed. The files are gigantic and have the same cons as embeds

Misc

Links: https://rentry.org/sdg-link
https://catbox.moe/

Archives

SDupdates 1 for v1 of sdupdates
SDupdates 2 for v2 of sdupdates
SDump 1 for stuff that's unsorted and/or I have no idea where to sort them
Soutdated 1 for stuff that's outdated

Pub: 07 Nov 2022 17:40 UTC
Edit: 04 May 2023 18:00 UTC