This is a curated collection of up-to-date links and information. Everything else is put into one of the collections in Archives for archival or sorting purposes.

This collection is currently hosted on the SD Goldmine rentry, the SD Updates rentry (3), and GitHub

All rentry links here end with '.org' and can be changed to '.co'. Also, use incognito/private browsing when opening Google links; otherwise you lose your anonymity / someone may dox you


If you have information/files not on this list, have questions, or want to help, please contact me with details

Trip: questianon !!YbTGdICxQOw
Discord: malt#6065
Reddit: u/questianon

How to use this resource

The goldmine is ordered from surface-level content to deep-level content. If you are a newcomer to Stable Diffusion, it's highly recommended to start from the beginning.

To prevent redundancies, all items on this list are listed only once. To make sure you find what you're looking for, please use 'Ctrl + F' ('Cmd + F' on macOS).


Items on this list with a :cucumber: next to them represent my top pick for the category. This rating is entirely opinionated and represents what I have personally used and recommend, not what is necessarily "the best".


  1. Ckpts/hypernetworks/embeddings and things downloaded from here are not inherently safe as of right now. They can be pickled/contain malicious code. Use your common sense and protect yourself as you would with any random download link you see on the internet.
  2. Monitor your GPU temps and increase cooling and/or undervolt them if you need to. There have been claims of GPU issues due to high temps.


Don't forget to git pull to get a lot of new optimizations + updates. If SD breaks, go backward in commits until it starts working again


  • If on Windows:
    1. navigate to the webui directory through command prompt or git bash
      a. Git bash: right click > git bash here
      b. Command prompt: click the address bar in File Explorer (the spot between the folder path and the down arrow), type "cmd", and press Enter.
      c. If you don't know how to do this, open command prompt and type "cd [path to stable-diffusion-webui]" (you can get the path by right-clicking the folder in the address bar, or by holding Shift and right-clicking the stable-diffusion-webui folder and choosing "Copy as path")
    2. git pull
    3. pip install -r requirements.txt
  • If on Linux:
    1. go to the webui directory
    2. source ./venv/bin/activate
      a. if this doesn't work, run python -m venv venv beforehand
    3. git pull
    4. pip install -r requirements.txt





Hypertextbook: This is a tutorial/commentary that guides a newcomer through setting up and using Stable Diffusion to its fullest. It's meant to be a supplement to SD Goldmine, but can be used without it.

Getting Started


AMD isn't as easy to set up as NVIDIA. I don't have an AMD card, so I don't know if these guides are good


Honestly I don't know what goes here. I'll add a guide if I remember


CPU is even less documented. I don't use my CPU for SD, so I don't know if these guides are good

Apple Silicon

Even less documented


Why are my outputs black? (Any card)

Add " --no-half-vae " (remove the quotations) to your commandline args in webui-user.bat

Why are my outputs black? (16xx card)

Add " --precision full --no-half " (remove the quotations) to your commandline args in webui-user.bat


These are repositories containing general AI knowledge





These are documents containing general prompting knowledge





Prompt Database




Tag Rankings

Tag Comparisons





Other Comparisons


Some extensions I came across that are probably in the webui extension browser




Plugins for External Apps

I didn't check the safety of these plugins, but you can check the open-source ones yourself





Unsorted but update was pushed

Prompt word/phrase collection:

  • Anon says that "8k, 4k, (highres:1.1), best quality, (masterpiece:1.3)" leads to nice details

According to an anon, the VAE seems to provide saturation/contrast and some line thickness (vae-ft-ema-56000-ema-pruned). Example (left with 56k, right with anything VAE):

Japanese prompt generator:
Build your prompt (Chinese):
NAI Prompts:
Prompt similarity tester:

Multilingual study:

Aesthetic value (imgs used to train SD):
Clip retrieval (text to CLIP to search):

Aesthetic scorer python script:
Another scorer:
Supposedly another one?:
Another Aesthetic Scorer:

NAI to webui translator (not 100% accurate):

Prompt editing parts of image but without using img2img/inpaint/prompt editing guide by anon:

Tip Dump:
Info dump of tips:
Outdated guide:
Tip for more photorealism:

  • TLDR: add noise to your img before img2img

NAI prompt tips:
NAI tips 2:

Masterpiece vs no masterpiece:

DPM-Solver Github:

Prompt: 1girl, pointy ears, white hair, medium hair, ahoge, hair between eyes, green eyes, medium:small breasts, cyberpunk, hair strand, dynamic angle, cute, wide hips, blush, sharp eyes, ear piercing, happy, hair highlights, multicoloured hair, cybersuit, cyber gas mask, spaceship computers, ai core, spaceship interior
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, animal ears, panties

Original image:
Steps: 50, Sampler: DDIM, CFG scale: 11, Seed: 3563250880, Size: 1024x1024, Model hash: cc024d46, Denoising strength: 0.57, Clip skip: 2, ENSD: 31337, First pass size: 512x512
NAI/SD mix at 0.25

Deep Danbooru:

Embedding tester:

Collection of Aesthetic Gradients:

Euler vs. Euler A:

According to anon: DPM++ should converge to a result much, much faster than Euler does. It should still converge to the same result, though.

(info by anon) According to, the M samplers are better than the S samplers

Seed hunting:

  • By nai speedrun asuka imgur anon:
    >made something that might help the highres seed/prompt hunters out there. this mimics the "0x0" firstpass calculation and suggests lowres dimensions based on target highres size. it also shows data about firstpass cropping as well. it's a single file so you can download and use it offline. picrel.
    >view code and download from
    >for example you can run "firstpass" lowres batches for seed/prompt hunting, then use them in firstpass size to preserve composition when making highres.
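The "0x0" firstpass suggestion the tool above mimics can be sketched roughly like this. This assumes the old hires-fix behavior of targeting about a 512x512 pixel budget at the same aspect ratio, rounded to multiples of 64; the exact webui rounding/cropping rules may differ, and `suggest_firstpass` is a made-up name:

```python
import math

# Rough sketch: given a target highres size, suggest lowres "firstpass"
# dimensions with roughly a 512x512 pixel budget, same aspect ratio,
# rounded to multiples of 64. (Assumption: this mirrors the old hires-fix
# behavior; webui's exact rounding and crop handling may differ.)
def suggest_firstpass(target_w, target_h, budget=512 * 512):
    scale = math.sqrt(budget / (target_w * target_h))
    w = max(64, round(target_w * scale / 64) * 64)
    h = max(64, round(target_h * scale / 64) * 64)
    return w, h

# e.g. suggest_firstpass(1024, 1024) -> (512, 512)
```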

Script for tagging (like in NAI) in AUTOMATIC's webui:
Danbooru Tag Exporter:
Tags (latest vers):
Basic gelbooru scraper:
Scrape danbooru images and tags like for e621 for tagging datasets:

Random Prompts:
Python script of generating random NSFW prompts:
Prompt randomizer:
Prompt generator:

  • apparently UMI uses these?
funny prompt gen that surprisingly works:
Unprompted extension released:


script that pulls prompt from and based on search terms:
randomize generation params for txt2img, works with other extensions:

Ideas for when you have none:

External masking for inpainting (no more brush or WIN magnifier):
anon: there's a command arg for adding basic painting, it's '--gradio-img2img-tool'

Script collection:
Prompt matrix tutorial:
Animation Script:
Animation script 2:
Video Script:
Masking Script:
XYZ Grid Script:
Vector Graphics:
Prompt changing scripts:

Interpolation script (img2img + txt2img mix):

img2tiles script:
Script for outpainting:
Img2img animation script:

Google's interpolation script:

Deforum guide:
Animation Guide:
Rotoscope guide:
Chroma key after SD (fully prompted?):

Prompt travel:

More animation guide:
Animation guide + example for face:
Something for animation:

Animating faces by anon:

workflow looks like this:
>generate square portrait (i use 1024 for this example)
>create or find driving video
>crop driving video to square with ffmpeg, making sure to match the general distance from camera and face position (it does not do well with panning/zooming video or too much head movement)
>run thin-plate-spline-motion-model
>take result.mp4 and put it into Video2x (Waifu2x Caffe)
>put into flowframes for 60fps and webm

>if you don't care about upscaling it makes 256x256 pretty easily
>an extension for webui could probably be made by someone smarter than me, it's a bit tedious right now with so many terminals

here is a pastebin of useful commands for my workflow

Another person who used it:

Img2img megalist + implementations:

Runway inpaint model:

Inpainting Tips:
Rentry version:

Artist inspiration:

Collection + Info:
Deforum (video animation):


ddetailer (object detection and auto-mask, helpful in fixing faces without manually masking):
Aesthetic Gradients:
Autocomplete Tags:
Prompt Randomizer:
Wildcard script + collection of wildcards:
Symmetric image script (latent mirroring):

macOS Finder right-click menu extension:
Search danbooru for tags directly in AUTOMATIC1111's webui extension:

  • Supports post IDs and all the normal Danbooru search syntax

Clip interrogator:
2 (apparently better than AUTO webui's interrogate):,

Enhancement Workflow with SD Upscale and inpainting by anon:

Upscaling + detail with SD Upscale:

Inpainting a face by anon:

send the picture to inpaint
modify the prompt to remove anything related to the background
add (face) to the prompt
slap a masking blob over the whole face
mask blur 10-16 (may have to adjust after), masked content: original, inpaint at full resolution checked, full resolution padding 0, sampling steps ~40-50, sampling method DDIM, width and height set to your original picture's full res
denoising strength .4-.5 if you want minor adjustments, .6-.7 if you want to really regenerate the entire masked area
let it rip

  • AUTOMATIC1111 webui modification that "compensates for the natural heavy-headedness of SD by adding a line from 0 to sqrt(2) over the 0-74 token range (anon)" (evens out the token weights with a linear model; helps with the weight reset at 75 tokens (?))


Tutorial + how to use on ALL models (applies for the NAI vae too):

Booru tag scraping:

Creating fake animes:

Some observations by anon:

  1. Removing the spaces after the commas changed nothing
  2. Using "best_quality" instead of "best quality" did change the image. masterpiece,best_quality,akai haato but she is a spider,blonde hair,blue eyes
  3. Changing all of the spaces into underscores changed the image somewhat substantially.
  4. Replacing those commas with spaces changed the image again.

Reduce bias of dreambooth models:

Landscape tutorial:

Anon's process:

  • Start with a prompt to get the general scenario you have in mind, here I was just looking to seggs the rrat so I used the embed here >>36743515 and described some of her character features to help steer the AI (in this case hair details, sharp teeth, her mouse ears and tail) as well as making her be naked and having vaginal sex
  • Generate images at a default resolution size (512 by X pixels) at a relatively standard number of steps (30 in this case) and keep going until I find an image that's in a position I like (in this case seed 1920052602 gave me a very nice one to work with, as you can see here (embed))
  • Copy the seed of the image and paste it into the Seed field on the Web UI, which will maintain the composition of the image. I then double the resolution I was working with (so here I went from 512 by 768 to 1024 by 1536) and checkmark the "Hires fix option" underneath the width and height sliders. Hires fix is the secret sauce on the Web UI that helps maintain the detail of the image when you are upscaling the resolution of the image, and combined with that Upscale latent space option I mentioned earlier it really enhances the detail. With that done you can generate the upscaled image.
  • Play around with the weights of the prompt tags and add things to the negatives to fix little things like hair being too red, tummy too chubby, etc. You have to be careful with adding new tags because that can drastically change the image

Anon's booba process:
>you can generate a perfect barbie doll anatomy but more accurate chuba in curated
>then switch to full, img2img it on the same seed after blotching nipples on it like a caveman, and hit generate

Boooba v2:

  1. Generate whatever NSFW proompt you were thinking of using the CURATED model, yes, I know that sounds ridiculous (embed)
  2. Inpaint the naughty bits back in. You REALLY don't have to do a good job of this: (embed)
  3. Switch to Full after clicking "Save", set Strength to 0.69, Noise to 0.17, and make sure you copy/paste the same seed # back in. Hit Generate: (embed)
    Compare that with what you'd get trying to generate the same exact proompt using the Full model purely txt2img on the same seed: (embed)

Img2img rotoscoping tutorial by anon:

1. extract image sequence from video
2. test the prompt using the 1st photo from the batch
3. find the suitable prompt that you want; the pose/sexual acts should be the same as the original to prevent weirdness
4. CFG Scale and Denoising Strength are very important
> A low CFG Scale will make your image follow your prompt less and make it blurrier and messier (I use 9-13)
> Denoising Strength determines the mix between your prompt and your image: 0 = original input, 1 = only the prompt, with nothing resembling the input except the colors.
the interesting thing I've noticed is that Denoising Strength is not linear; it behaves more exponentially (my speculation is 0-0.6 = still reminds of the original, 0.61-0.76 = starting to change, 0.77-1 = changes a lot)
5. sampler:
> Euler-a is quite nice, but lacks consistency between steps; adding/lowering 1 step can change the entire photo
> Euler is better than Euler-a in terms of consistency but requires more steps = longer generation time between each image
> DPM++ 2S a Karras is the best in quality (for me) but it is very slow, good for generating a single image
> DDIM is the fastest and very useful for this case, 20-30 steps can produce a nice quality anime image.
6. test the prompt in a batch of 4-6 to choose a seed
7. Batch img2img
8. Assemble the generated images into video. I don't want to use every frame, so I rendered in 2-frame steps and halved the frame rate
9. Use Flowframes to interpolate the in-between frames to match the original video frame rate.


File2prompt (I think it's multiple generations in a row?):

Models, Embeddings, and Hypernetworks

Downloads listed as "sus" or "might be pickled" generally mean there were 0 replies and not enough "information" (like training info), or the replies indicated they were suspicious. I don't think any of the embeds/hypernets have had their code checked, so they could all be malicious, but as far as I know no one has gotten pickled yet

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious:, Make sure to check them for pickles using a tool like or
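As a rough illustration of what pickle-scanning tools look for (not a replacement for a maintained scanner): the standard library's pickletools can list the imports a pickle stream would perform, and a zip-format .ckpt usually carries a data.pkl inside. The function names and the allowlist below are my own assumptions, not any particular tool's API:

```python
import pickletools
import zipfile

# Modules a benign PyTorch checkpoint normally imports; anything outside
# this (assumed) allowlist is worth a manual look before loading the file.
SAFE_PREFIXES = ("collections", "torch", "numpy", "_codecs")

def suspicious_imports(pickle_bytes):
    """Return GLOBAL/STACK_GLOBAL imports in a pickle stream outside the allowlist."""
    found, strings = [], []
    for opcode, arg, _pos in pickletools.genops(pickle_bytes):
        if opcode.name == "GLOBAL":
            found.append(arg.replace(" ", "."))
        elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE"):
            strings.append(arg)  # STACK_GLOBAL pops module/name strings
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            found.append(strings[-2] + "." + strings[-1])
    return [name for name in found if not name.startswith(SAFE_PREFIXES)]

def scan_ckpt(path):
    """Scan every .pkl member inside a zip-format .ckpt file."""
    flagged = {}
    with zipfile.ZipFile(path) as zf:
        for member in zf.namelist():
            if member.endswith(".pkl"):
                bad = suspicious_imports(zf.read(member))
                if bad:
                    flagged[member] = bad
    return flagged
```

This only inspects opcodes without executing anything, which is the safe way to look inside an untrusted pickle; never `torch.load` a file you haven't vetted.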


Model pruner:

Collection of potentially dangerous models:
Huggingface collection:

  • V1 repo:
  • V2 repo:
    Direct Downloads (no login needed)
    Torrent for the 2.0 release by anon. License text included so that it's okay to distribute.
    "You can check that I didn't pickle these torrents by comparing the .ckpt file's hash against the SHA256 hashes on the official repo. No login needed."
  • anything.ckpt (v3 6569e224; v2.1 619c23f0), a Chinese finetune/training continuation of NAI, is released:

potential magnet that someone gave me



Little update, here's the link with all including VAE (second one)





another magnet on from the author

*Hrrzg style 768px:


Raspberry mix download by anon (not sure if safe):
Strawberry Mix (anon, safety caution):



  1. (Weighted Sum 0.05) Anything3 + SD1.5 = Temp1
  2. (Add Difference 1.0) Temp1 + F222 + SD1.5 = Temp2
  3. (Weighted Sum 0.2) Temp2 + TrinArt2_115000 = ThisModel
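For reference, the Weighted Sum and Add Difference operations used in recipes like the one above boil down to simple per-weight arithmetic. A minimal sketch over plain dicts of floats (real checkpoints are dicts of torch tensors, but the math is the same):

```python
# Sketch of the two merge ops the webui checkpoint merger exposes,
# using dicts of floats in place of tensor state dicts.

def weighted_sum(a, b, alpha):
    """result = A * (1 - alpha) + B * alpha, per weight."""
    return {k: a[k] * (1 - alpha) + b[k] * alpha for k in a}

def add_difference(a, b, c, alpha):
    """result = A + (B - C) * alpha, per weight."""
    return {k: a[k] + (b[k] - c[k]) * alpha for k in a}

# The three-step recipe above, with hypothetical variable names:
# temp1 = weighted_sum(anything3, sd15, 0.05)
# temp2 = add_difference(temp1, f222, sd15, 1.0)
# final = weighted_sum(temp2, trinart2_115000, 0.2)
```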

Anon's model for vampires(?):

My steps

Step 1:
>A : Anything-V3.0
>B : trinart2_step115000.ckpt [f1c7e952]
>C : stable-diffusion-v-1-4-original

A from
B from
C from

and I "Add Difference" at 0.45, and name as part1.ckpt

Step 2:
>A : part1.ckpt (What I made in Step 1)
>B: Cafe Unofficial Instagram TEST Model [50b987ae]

B is from

and I "Weighted Sum" at 0.5, and name it TrinArtMix.ckpt

Antler's Mix (didn't check for pickles)

Alternate mix, apparently? (didn't check for pickles)

((anything_0.95 + sd-1.5_0.05) + f222 - sd-1.5)_0.75 + trinart2_115000_0.25

RandoMix2 (didn't check for pickles)

RaptorBerry (didn't check for pickles)

NAI+SD+Trinart characters+Trinart+F222 (weighted sum, values less than 0.3):

"Ben Dover Mix"©®™ is my mix
if you're interested
follow this guide
The mix is done exactly the same way as berrymix
but with anythingv3 instead of nai
f222 instead of f111
and sd v1.5 instead of sd v1.4

AloeVera mix:

Nutmeg mix:

0.05 NAI + SD1.5
0.05 mix + f222
0.05 mix + r34
0.05 mix + SF
0.3 Anything + mix

Hyper-versatile SD model:

  • Made from Redshift Diffusion, Waifu Diffusion 1.2, Stable Diffusion 1.4, Novel AI, Yiffy, and Zack3D_Kinky-v1; capable of rendering humans, furries, landscapes, backgrounds, buildings, Disney style, painterly styles, and more

Hassan (has a few mixes, not sure if the dls are safe):


Weighted Sum @ 0.05 to make tempmodel1

A: Anything.V3, B: SD1.5, C: null

Add Difference @ 1.0 to make tempmodel2

A: tempmodel1, B: Zeipher F222, C: SD1.5

Weighted Sum @ 0.25 to make tempmodel3

A: tempmodel2, B: r34_e4, C: Null

Weighted Sum @ 0.20 to make FINAL MODEL

A: tempmodel3, B: NAI


Big collection of berry mixes:

Super duper mixing cookbook from hdg (most updated):

EveryDream Trainer

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious:, Make sure to check them for pickles using a tool like or

Download + info + prompt templates:

Dreambooth Models:

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious:, Make sure to check them for pickles using a tool like or



If an embedding is >80mb, I mislabeled it and it's a hypernetwork

Use a download manager to download these. It saves a lot of time + good download managers will tell you if you have already downloaded one

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious:, Make sure to check them for pickles using a tool like or

You can check .pts here for their training info using a text editor

Found on 4chan:


If a hypernetwork is <80mb, I mislabeled it and it's an embedding

Use a download manager to download these. It saves a lot of time + good download managers will tell you if you have already downloaded one

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious:, Make sure to check them for pickles using a tool like or

Chinese telegram (uploaded by telegram anon): magnet:?xt=urn:btih:8cea1f404acfa11b5996d1f1a4af9e3ef2946be0&dn=ChatExport%5F2022-10-30&

I've made a full export of the Chinese Telegram channel.

It's 37 GB (~160 hypernetworks and a bunch of full models).
If you don't want all that, I would recommend downloading everything but the 'files' folder first (like 26 MB), then opening the html file to decide what you want.

Found on 4chan:

Found on Discord:

Colored eyes:

>Hey everyone, this hypernetwork was released by me (IWillRemember) (IWillRemember#1912 on discord); if you have any questions you can find me on discord!
>Did the Hn as a commission for a friend 😄
>I'm releasing an Hn to do better animation like glowing eyes, and a more slender face/upper body.
>The tags are : 
>detailed eyes, 
>(color) eyes  = ex: white eyes, blue eyes, etc etc
>Trained for 12k steps on a 80 ish images dataset
>You can use the Hn with a str of 1 without any problem.
>Happy prompting!

Aesthetic Gradients

Collection of Aesthetic Gradients:

Polar Resources


If you have one of these, please get it to me

Apparently there's a Google drive collection of downloads? (might be the korean site but mistyped)






Train stable diffusion model with Diffusers, Hivemind and Pytorch Lightning:

Official pytorch implementation of one-shot text-to-image generation via contrastive prompt-tuning AKA 1-image embedding training:
DreamArtist extension changes code in the modules directory, which might not be safe

Dreambooth colab with custom model (old, so might be outdated):

Dreambooth thing in Japanese:

  • "Has aspect ratio bucketing, saving in fp16, etc."

GPU seems to determine training results (--low/med vram arg too)


Image tagger helper:

anything.ckpt comparisons
Old final-pruned: (embed)
v3-pruned-fp16: (embed)
v3-pruned-fp32: (embed)
v3 full or whatever: (embed)

Supposedly how to append model data without merging by anon:

x = (Final Dreambooth Model) - (Original Model)
filter x for x >= (Some Threshold)
out = (Model You Want To Merge It With) * (1 - M) + x * M
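The pseudocode above could be read as the following sketch (over dicts of floats for simplicity; treating the threshold filter as an absolute-value cutoff is my interpretation, and the function name is made up):

```python
def append_model_data(final, original, target, threshold, m):
    """Sketch of the anon's recipe: take the dreambooth delta, keep only
    the large changes, and blend that delta into the target model.
    Real models are dicts of tensors; floats are used here for clarity."""
    out = {}
    for k in target:
        delta = final[k] - original[k]          # x = final - original
        if abs(delta) < threshold:              # filter x for x >= threshold
            delta = 0.0                         # (abs cutoff is an assumption)
        out[k] = target[k] * (1 - m) + delta * m
    return out
```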

Model merging method that preserves weights:

Alternate model merging using by anon:

Dehydrate a model
Hydrate it back into a dreambooth
Merge with other stuff
run 'python dreamboothmodel.ckpt basemode.ckpt --output dreambooth_only' to dehydrate
run 'python dreambooth_only target_model.ckpt --output output_model.ckpt' to hydrate it into another model.

3rd party git re basin:

Git rebasin pytorch:

  • Aesthetic Gradients:
  • Image aesthetic rating (?):
  • 1 img TI:
  • You can set a learning rate of "0.1:500, 0.01:1000, 0.001:10000" in textual inversion and it will follow the schedule
  • Tip: combining natural language sentences and tags can create a better training
  • Dreambooth on 2080ti 11GB (anon's guide):
  • Training a TI on 6gb (not sure if safe or even works, instructions by uploader anon):
    • Have xformers enabled.

      This diff does 2 things.

      1. enables cross attention optimizations during TI training. Voldy disabled the optimizations during training because he said it gave him bad results. However, if you use the InvokeAI optimization or xformers after the xformers fix it does not give you bad results anymore.
        This saves around 1.5GB vram with xformers
      2. unloads vae from VRAM during training. This is done in hypernetworks, and idk why it wasn't in the code for TI. It doesn't break anything and doesn't make anything worse.
        This saves around .2 GB VRAM

      After you apply this, turn on Move VAE and CLIP to RAM and Use cross attention optimizations while training

  • By anon:

    No idea if someone else will have a use for this but I needed to make it for myself since I can't get a hypernetwork trained regardless of what I do.

    That link above is a spreadsheet where you paste the hypernetwork_loss.csv data into the A1 cell (A2 is where the numbers should start). Then you can use M1 to set how many epochs of the most recent data you want to use for the red trendline (green is the same length but starts before red). Outlier % is if you want to filter out extreme points: 100% means all points are considered for the trendline, 95% filters out the top and bottom 5%, etc. Basically you can use this to see where the training started fucking up.

  • Anon's best:

    Normalized Layers
    Dropout Enabled
    XavierNormal (Not sure yet on this one. Normal or XavierUniform might be better)


Rate: 5e-5:1000, 5e-6:5000, 5e-7:20000, 5e-8:100000
Max Steps: 100,000
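Schedule strings like the one above (and the "0.1:500, 0.01:1000, ..." textual inversion syntax mentioned earlier) are "rate:until-step" pairs. A small parser sketch, assuming each rate applies up to and including its listed step (function names are mine):

```python
def parse_lr_schedule(spec):
    """Parse a webui-style 'rate:step, rate:step' string into (rate, until_step) pairs."""
    pairs = []
    for chunk in spec.split(","):
        rate, step = chunk.strip().split(":")
        pairs.append((float(rate), int(step)))
    return pairs

def lr_at(schedule, step):
    """Return the learning rate in effect at a given global step."""
    for rate, until in schedule:
        if step <= until:
            return rate
    return schedule[-1][0]  # past the last milestone: keep the final rate

# e.g. lr_at(parse_lr_schedule("5e-5:1000, 5e-6:5000"), 1500) -> 5e-06
```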

Vector guide by anon:

  • Another training guide:
  • Super simple embed guide by anon: Grab the high quality images, run them through the processor. Create an embedding called art by {artist}. Then train that same embedding with your processed images and set the learning rate to the following: 0.1:500, 0.05:1000, 0.025:1500, 0.001:2000, 1e-5. Run it for 10k steps and you'll be good. No need for an entire hypernetwork.
  • Has training info and a tutorial for Asagi Igawa, Edjit, and Rouge the Bat embeds (RealYiffingFar#4510):
  • Anon's dreambooth guide:
    for a character, steps ~1500-2000
    checkpoint every 500 if you have the VRAM for it, else 99999 (ie: at the end), previews are shit don't even bother, 99999
    learning rate: 0.000001-0.000005, I don't have a reason for it, default is probably fine.
    instance prompt: [filewords], class prompt: 1girl, 20x as many regularisation images as training images, style matters, if you want anime get anime regularisation stuff.
    advanced: auto-adjust, batch size: 2, 8bit adam, fp16, don't cache latents (noticeable speedup if you do cache), train text, train EMA, gradient checkpointing, 2 gradient accumulation

none of this is concrete stuff I do every time, I just roll whatever works. the single most important stuff is to ensure you never tag anything that isn't in an image after cropping.
reduce the tags as much as humanly possible, ie:

legwear, black thighhighs, long socks, long thighhighs, pantyhose, stockings, etc.

to just:


try to add images that both do and do not use all of your tags. if you have a pic with thighhighs, include at least one without, otherwise the tag is meaningless
if your training cannot establish a positive and negative for each tag it's gonna struggle to recall those features
have makima with yellow eyes? include some girl with similar features but red or blue eyes, or just an entirely different girl that's been accurately tagged with the negatives you need
in this way you can distinguish between features and emphasise stuff.


Training dataset with aesthetic ratings:


Check out and for other questions

What's all the new stuff?

Check here to see if your question is answered:

How do I set this up?

Refer to (has the "Asuka test")

What's the "Hello Asuka" test?

It's a basic test to see if you're able to get a 1:1 recreation with NAI and have everything set up properly. Coined after asuka anon and his efforts to recreate 1:1 NAI before all the updates.

Refer to

What is pickling/getting pickled?

ckpt files and python files can execute code. Getting pickled is when these files execute malicious code that infects your computer with malware. It's a memey/funny way of saying you got hacked.

I want to run this, but my computer is too bad. Is there any other way?
Check out one of these (I have not used most of these, so they might be unsafe to use):

How do I directly check AUTOMATIC1111's webui updates?

For a complete list of updates, go here:

What do I do if a new update bricks/breaks my AUTOMATIC1111 webui installation?

Go to
See when the change happened that broke your install
Get the blue number on the right before the change
Open a command line/git bash to where you usually git pull (the root of your install)
'git checkout <blue number without these angled brackets>'
to reset your install, use 'git checkout master'

git checkout . will clean any changes you do

Another Guide:

What is...? (by anon)

What is a VAE?

Variational autoencoder, basically a "compressor" that can turn images into a smaller representation and then "decompress" them back to their original size. This is needed so you don't need tons of VRAM and processing power since the "diffusion" part is done in the smaller representation (I think). The newer SD 1.5 VAEs have been trained more and they can recreate some smaller details better.

What is pruning?

Removing unnecessary data (anything that isn't needed for image generation) from the model so that it takes less disk space and fits more easily into your VRAM

What is a pickle, not referring to the python file format? What is the meme surrounding this?

When the NAI model leaked people were scared that it might contain malicious code that could be executed when the model is loaded. People started making pickle memes because of the file format.

Why is some stuff tagged as being 'dangerous', and why does the StableDiffusion WebUI have a 'safe-unpickle' flag? -- I'm stuck on pytorch 1.11 so I have to disable this

Safe unpickling checks the pickle's code library imports against an approved list. If it tried to import something that isn't on the list it won't load it. This doesn't necessarily mean it's dangerous but you should be cautious. Some stuff might be able to slip through and execute arbitrary code on your computer.

Is the rentry stuff all written by one person or many?

There are many people maintaining different rentries.

What's the difference between embeds, hypernetworks, and dreambooths? What should I train?

I've tested a lot of the model modifications and here are my thoughts on them:
embeds: these are tiny files which find the best representation of whatever you're training them on in the base model. By far the most flexible option and will have very good results if the goal is to group or emphasize things the model already understands
hypernetworks: these are like instructions that slightly modify the result of the base model after each sampling step. They are quite powerful and work decently for everything I've tried (subjects, styles, compositions). The cons are that they can't be easily combined like embeds can. They are also harder to train because good parameters seem to vary wildly, so a lot of experimentation is needed each time
dreambooth: modifies part of the model itself and is the only method which actually teaches it something new. Fast and accurate results but the weights for generating adjacent stuff will get trashed. These are gigantic and have the same cons as embeds




SDupdates 1 for v1 of sdupdates
SDupdates 2 for v2 of sdupdates
SDump 1 for stuff that's unsorted and/or I have no idea where to sort them
Soutdated 1 for stuff that's outdated

Pub: 07 Nov 2022 17:40 UTC
Edit: 26 Nov 2022 20:53 UTC
Views: 166924