-> V3 (news): https://rentry.org/sdupdates3

-> V3 (non-news): https://rentry.org/sdgoldmine

SD RESOURCE GOLDMINE 2

Version 2 of https://www.rentry.org/sdupdates. If something you want isn't here, it's probably in the other rentry

Warnings:

  1. Ckpts/hypernetworks/embeddings are not inherently safe as of right now. They can be pickled/contain malicious code. Use your common sense and protect yourself as you would with any random download link you see on the internet.
  2. Monitor your GPU temps and increase cooling and/or undervolt them if you need to. There have been claims of GPU issues due to high temps.

Links are dying. If you happen to have a file listed in https://rentry.org/sdupdates#deadmissing or that's not on this list, please get it to me.

There is now a github for this rentry: https://github.com/questianon/sdupdates. This should allow you to see changes across the different updates

Changelog: everything except discord and reddit

All rentry links here end with '.org' and can be changed to '.co'. Also, use incognito/private browsing when opening Google links, or else you lose your anonymity / someone may dox you

Contact

If you have information/files (e.g. embed) not on this list, have questions, or want to help, please contact me with details

Socials:
Trip: questianon !!YbTGdICxQOw
Discord: malt#6065
Reddit: u/questianon
Github: https://github.com/questianon
Twitter: https://twitter.com/questianon

NEWSFEED

Don't forget to git pull to get a lot of new optimizations + updates. If SD breaks, go backward in commits until it starts working again

Instructions:

  • If on Windows:
    1. navigate to the webui directory through command prompt or git bash
      a. Git bash: right click > git bash here
      b. Command prompt: click the empty space in Explorer's address bar (between the folder path and the down arrow) and type "cmd".
      c. If you don't know how to do this, open command prompt and type "cd [path to stable-diffusion-webui]" (you can get the path by right-clicking the address bar, or by holding Shift and right-clicking the stable-diffusion-webui folder and choosing "Copy as path")
    2. git pull
    3. pip install -r requirements.txt
  • If on Linux:
    1. go to the webui directory
    2. source ./venv/bin/activate
      a. if this doesn't work, run python -m venv venv beforehand
    3. git pull
    4. pip install -r requirements.txt

SDupdates the trilogy is happening soon

11/10

11/9+11/8

11/8+11/7

11/7

11/5 continued+11/6

11/5

11/4

11/3

11/2

11/1

10/31

Prompting

Google Docs with a prompt list/ranking/general info for waifu creation:
https://docs.google.com/document/d/1Vw-OCUKNJHKZi7chUtjpDEIus112XBVSYHIATKi1q7s/edit?usp=sharing
Ranked and classified danbooru tags, sorted by number of pictures and ranked by type and quality (WD): https://cdn.discordapp.com/attachments/1029235713989951578/1038585908934483999/Kopi_af_WAIFU_MASTER_PROMPT_DANBOORU_LIST.pdf
Anon's prompt collection: https://mega.nz/folder/VHwF1Yga#sJhxeTuPKODgpN5h1ALTQg
Tag effects on img: https://pastebin.com/GurXf9a4

  • Anon says that "8k, 4k, (highres:1.1), best quality, (masterpiece:1.3)" leads to nice details

Japanese prompt collection: http://yaraon-blog.com/archives/225884
Chinese scroll collection: https://note.com/sa1p/
GREAT CHINESE TOME OF PROMPTING KNOWLEDGE AND WISDOM 101 GUIDE: https://docs.qq.com/doc/DWHl3am5Zb05QbGVs

GREAT CHINESE SCROLLS OF PROMPTING ON 1.5: HEIGHTENED LEVELS OF KNOWLEDGE AND WISDOM 101: https://docs.qq.com/doc/DWGh4QnZBVlJYRkly
GREAT CHINESE ENCYCLOPEDIA OF PROMPTING ON GENERAL KNOWLEDGE: SPOOKY EDITION: https://docs.qq.com/doc/DWEpNdERNbnBRZWNL
GREAT TOME OF MAGICAL ESSENCE: https://docs.qq.com/doc/DSHBGRmRUUURjVmNM
GREAT CHINESE TOME V1.7 OF MASTERY IN THE ARCANE PROMPTING ARTS
GREAT JAPANESE TOME OF MASTERMINDING ANIME PROMPTS AND IMAGINATIVE AI MACHINATIONS 101 GUIDE https://p1atdev.notion.site/021f27001f37435aacf3c84f2bc093b5?p=f9d8c61c4ed8471a9ca0d701d80f9e28

Using emoticons and emojis can be really good: https://docs.google.com/spreadsheets/d/1aTYr4723NSPZul6AVYOX56CVA0YP3qPos8rg4RwVIzA/edit#gid=1453378351
🕊💥😱😲😶🙄 leads to https://files.catbox.moe/biy755.png
🌷🕊🗓👋😛👋 leads to https://files.catbox.moe/7khxe0.png
spoken squiggle: https://twitter.com/AI_Illust_000/status/1588838369593032706
Anon: The emoji performs well in terms of semantic accuracy because it is only one character.

Database of prompts: https://publicprompts.art/

Hololive prompts: https://rentry.org/3y56t

Big negative: https://pastes.io/x9crpin0pq
Fat negative: https://www.reddit.com/r/WaifuDiffusion/comments/yrpovu/img2img_from_my_own_loose_sketch/

Krea AI prompt database: https://github.com/krea-ai/open-prompts
Prompt search: https://www.ptsearch.info/home/
Another search: http://novelai.io/
4chan prompt search: https://desuarchive.org/g/search/text/masterpiece%20high%20quality/
Prompt book: https://openart.ai/promptbook
Prompt word/phrase collection: https://huggingface.co/spaces/Gustavosta/MagicPrompt-Stable-Diffusion/raw/main/ideas.txt

Dynamic prompts: https://github.com/adieyal/sd-dynamic-prompts

Japanese prompt generator: https://magic-generator.herokuapp.com/
Build your prompt (chinese): https://tags.novelai.dev/
NAI Prompts: https://seesaawiki.jp/nai_ch/d/%c8%c7%b8%a2%a5%ad%a5%e3%a5%e9%ba%c6%b8%bd/%a5%a2%a5%cb%a5%e1%b7%cf

Japanese wiki: https://seesaawiki.jp/nai_ch/

Korean wiki: https://arca.live/b/aiart/60392904
Korean wiki 2: https://arca.live/b/aiart/60466181

Multilingual info by anon:

CLIP can't really understand Chinese (or anything other than English). (Maybe some characters are bound to certain concepts for reasons I don't know.)
But some emoji and Chinese/Japanese characters are meaningful to CLIP, like イカ, which means squid in Japanese. You will get something like a squid if you put those characters in a prompt.
anon2: Yeah, because CLIP can't understand Chinese, for the "natural language" parts I had to translate some of the descriptions into English.

Multilingual study: https://jalonso.notion.site/Stable-Diffusion-Language-Comprehension-5209abc77a4f4f999ec6c9b4a48a9ca2

Aesthetic value: https://laion-aesthetic.datasette.io/laion-aesthetic-6pls

NAI to webui translator (not 100% accurate): https://seesaawiki.jp/nai_ch/d/%a5%d7%a5%ed%a5%f3%a5%d7%a5%c8%ca%d1%b4%b9

Prompt editing parts of image but without using img2img/inpaint/prompt editing guide by anon: https://files.catbox.moe/fglywg.JPG

Tip Dump: https://rentry.org/robs-novel-ai-tips
Tips: https://github.com/TravelingRobot/NAI_Community_Research/wiki/NAI-Diffusion:-Various-Tips-&-Tricks
Info dump of tips: https://rentry.org/Learnings
Outdated guide: https://rentry.co/8vaaa
Tip for more photorealism: https://www.reddit.com/r/StableDiffusion/comments/yhn6xx/comment/iuf1uxl/

  • TLDR: add noise to your img before img2img

NAI prompt tips: https://docs.novelai.net/image/promptmixing.html
NAI tips 2: https://docs.novelai.net/image/uifunctionalities.html

SD 1.4 vs 1.5: https://postimg.cc/gallery/mhvWsnx
NAI vs Anything: https://www.bilibili.com/read/cv19603218
Model merge comparisons: https://files.catbox.moe/rcxqsi.png
Model merge: https://files.catbox.moe/vgv44j.jpg
Some sampler comparisons: https://www.reddit.com/r/StableDiffusion/comments/xmwcrx/a_comparison_between_8_samplers_for_5_different/
More comparisons: https://files.catbox.moe/csrjt5.jpg
More: https://i.redd.it/o440iq04ocy91.jpg (https://www.reddit.com/r/StableDiffusion/comments/ynt7ap/another_new_sampler_steps_comparison/)
More: https://i.redd.it/ck4ujoz2k6y91.jpg (https://www.reddit.com/r/StableDiffusion/comments/yn2yp2/automatic1111_added_more_samplers_so_heres_a/)
Every sampler comparison: https://files.catbox.moe/u2d6mf.png

Prompt: 1girl, pointy ears, white hair, medium hair, ahoge, hair between eyes, green eyes, medium:small breasts, cyberpunk, hair strand, dynamic angle, cute, wide hips, blush, sharp eyes, ear piercing, happy, hair highlights, multicoloured hair, cybersuit, cyber gas mask, spaceship computers, ai core, spaceship interior
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, animal ears, panties

Original image:
Steps: 50, Sampler: DDIM, CFG scale: 11, Seed: 3563250880, Size: 1024x1024, Model hash: cc024d46, Denoising strength: 0.57, Clip skip: 2, ENSD: 31337, First pass size: 512x512
NAI/SD mix at 0.25

New samplers: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/4363
New vs. DDIM: https://files.catbox.moe/5hfl9h.png

f222 comparisons: https://desuarchive.org/g/search/text/f222/filter/text/start/2022-11-01/

Deep Danbooru: https://github.com/KichangKim/DeepDanbooru
Demo: https://huggingface.co/spaces/hysts/DeepDanbooru

Embedding tester: https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer

Collection of Aesthetic Gradients: https://github.com/vicgalle/stable-diffusion-aesthetic-gradients/tree/main/aesthetic_embeddings

Euler vs. Euler A: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/2017#discussioncomment-4021588

According to anon: DPM++ should converge to a result much, much faster than Euler does. It should still converge to the same result, though.

Seed hunting:

  • By nai speedrun asuka imgur anon:
    >made something that might help the highres seed/prompt hunters out there. this mimics the "0x0" firstpass calculation and suggests lowres dimensions based on target highres size. it also shows data about firstpass cropping as well. it's a single file so you can download and use offline. picrel.
    >https://preyx.github.io/sd-scale-calc/
    >view code and download from
    >https://files.catbox.moe/8ml5et.html
    >for example you can run "firstpass" lowres batches for seed/prompt hunting, then use them in firstpass size to preserve composition when making highres.
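
For reference, here's a minimal Python sketch of the kind of math the calculator above mimics (the exact webui firstpass behavior may differ by version; the rounding to multiples of 64 is an assumption based on how the webui sizes latents):

```python
import math

def firstpass_size(target_w, target_h, base=512, step=64):
    """Estimate the lowres 'firstpass' size used when firstpass width/height are left at 0:
    scale the target down so it covers roughly base*base pixels, rounded up to multiples of step."""
    scale = math.sqrt((base * base) / (target_w * target_h))
    return (math.ceil(scale * target_w / step) * step,
            math.ceil(scale * target_h / step) * step)

print(firstpass_size(1024, 1536))  # -> (448, 640): hunt seeds at ~448x640, then highres to 1024x1536
```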

Script for tagging (like in NAI) in AUTOMATIC's webui: https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
Danbooru Tag Exporter: https://sleazyfork.org/en/scripts/452976-danbooru-tags-select-to-export
Another: https://sleazyfork.org/en/scripts/453380-danbooru-tags-select-to-export-edited
Tags (latest vers): https://sleazyfork.org/en/scripts/453304-get-booru-tags-edited
Basic gelbooru scraper: https://pastebin.com/0yB9s338
UMI AI:

Random Prompts: https://rentry.org/randomprompts
Python script of generating random NSFW prompts: https://rentry.org/nsfw-random-prompt-gen
Prompt randomizer: https://github.com/adieyal/sd-dynamic-prompting
Prompt generator: https://github.com/h-a-te/prompt_generator

  • apparently UMI uses these?

http://dalle2-prompt-generator.s3-website-us-west-2.amazonaws.com/
https://randomwordgenerator.com/
funny prompt gen that surprisingly works: https://www.grc.com/passwords.htm
Unprompted extension released: https://github.com/ThereforeGames/unprompted

  • HAS ADS
  • Wildcards on steroids
  • Powerful scripting language
  • Can create templates out of booru tags
  • Can make shortcodes
  • "You can pull text from files, set up your own variables, process text through conditional functions, and so much more "

StylePile: https://github.com/some9000/StylePile
script that pulls prompt from Krea.ai and Lexica.art based on search terms: https://github.com/Vetchems/sd-lexikrea
randomize generation params for txt2img, works with other extensions: https://github.com/stysmmaker/stable-diffusion-webui-randomize

Ideas for when you have none: https://pentoprint.org/first-line-generator/
Colors: http://colorcode.is/search?q=pantone

https://www.painthua.com/ - New GUI focusing on Inpainting and Outpainting

I didn't check the safety of these plugins, but they're open source, so you can check them yourself
Photoshop/Krita plugin (free): https://internationaltd.github.io/defuser/ (kinda new and currently only 2 stars on github)

Photoshop: https://github.com/Invary/IvyPhotoshopDiffusion
Photoshop plugin (paid, not open source): https://www.flyingdog.de/sd/
Krita plugins (free):

GIMP:
https://github.com/blueturtleai/gimp-stable-diffusion

Blender:
https://github.com/carson-katri/dream-textures
https://github.com/benrugg/AI-Render

External masking: https://github.com/dfaker/stable-diffusion-webui-cv2-external-masking-script
anon: there's a command arg for adding basic painting, it's '--gradio-img2img-tool'

Script collection: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts
Prompt matrix tutorial: https://gigazine.net/gsc_news/en/20220909-automatic1111-stable-diffusion-webui-prompt-matrix/
Animation Script: https://github.com/amotile/stable-diffusion-studio
Animation script 2: https://github.com/Animator-Anon/Animator
Video Script: https://github.com/memes-forever/Stable-diffusion-webui-video
Masking Script: https://github.com/dfaker/stable-diffusion-webui-cv2-external-masking-script
XYZ Grid Script: https://github.com/xrpgame/xyz_plot_script
Vector Graphics: https://github.com/GeorgLegato/Txt2Vectorgraphics/blob/main/txt2vectorgfx.py
Txt2mask: https://github.com/ThereforeGames/txt2mask
Prompt changing scripts:

Interpolation script (img2img + txt2img mix): https://github.com/DiceOwl/StableDiffusionStuff

img2tiles script: https://github.com/arcanite24/img2tiles
Script for outpainting: https://github.com/TKoestlerx/sdexperiments
Img2img animation script: https://github.com/Animator-Anon/Animator/blob/main/animation_v6.py

Google's interpolation script: https://github.com/google-research/frame-interpolation

Animation Guide: https://rentry.org/AnimAnon#introduction
Chroma key after SD (fully prompted?): https://files.catbox.moe/d27xdl.gif

More animation guide: https://www.reddit.com/r/StableDiffusion/comments/ymwk53/better_frame_consistency/

Animating faces by anon:

workflow looks like this:
>generate square portrait (i use 1024 for this example)
>create or find driving video
>crop driving video to square with ffmpeg, making sure to match the general distance from camera and face position(it does not do well with panning/zooming video or too much head movement)
>run thin-plate-spline-motion-model
>take result.mp4 and put it into Video2x (Waifu2x Caffe)
>put into flowframes for 60fps and webm

>if you don't care about upscaling it makes 256x256 pretty easily
>an extension for webui could probably be made by someone smarter than me, its a bit tedious right now with so many terminals

here is a pastebin of useful commands for my workflow
https://pastebin.com/6Y6ZK8PN

Another person who used it: https://www.reddit.com/r/StableDiffusion/comments/ynejta/stable_diffusion_animated_with_thinplate_spline/

Giffusion tutorial:

>git clone https://github.com/megvii-research/ECCV2022-RIFE
this is my git diff on requirements.txt to make it work alongside the webui python environment
>-torch==1.6.0
>+torch==1.11.0
>-torchvision==0.7.0
>+torchvision==0.12.0
pip3 install -r requirements.txt
the most important part
>download the pretrained HD models and copy them into the same folder as inference_video.py
get ffmpeg for your OS (if you don't have ffmpeg, it's good to have beyond just this app)
>https://ffmpeg.org/download.html
after this you need to make sure ffmpeg.exe is in your PATH variable
then i typed
>python inference_video.py --exp=1 --video=1666410530347641.mp4 --fps=60
and it created the mp4 you see (i converted it into webm with this command)
>ffmpeg.exe -i 1666410530347641.mp4 1666410530347641.webm
Example: https://i.4cdn.org/h/1666414810239191.webm

Img2img megalist + implementations: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/2940

Runway inpaint model: https://huggingface.co/runwayml/stable-diffusion-inpainting

Inpainting Tips: https://www.pixiv.net/en/artworks/102083584
Rentry version: https://rentry.org/inpainting-guide-SD

Extensions:
Artist inspiration: https://github.com/yfszzx/stable-diffusion-webui-inspiration

History: https://github.com/yfszzx/stable-diffusion-webui-images-browser
Collection + Info: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Extensions
Deforum (video animation): https://github.com/deforum-art/deforum-for-automatic1111-webui

Auto-SD-Krita: https://github.com/Interpause/auto-sd-paint-ext

ddetailer (object detection and auto-mask, helpful in fixing faces without manually masking): https://github.com/dustysys/ddetailer
Aesthetic Gradients: https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients
Aesthetic Scorer: https://github.com/tsngo/stable-diffusion-webui-aesthetic-image-scorer
Autocomplete Tags: https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
Prompt Randomizer: https://github.com/adieyal/sd-dynamic-prompting
Wildcards: https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards/
Wildcard script + collection of wildcards: https://app.radicle.xyz/seeds/pine.radicle.garden/rad:git:hnrkcfpnw9hd5jb45b6qsqbr97eqcffjm7sby
Symmetric image script (latent mirroring): https://github.com/dfaker/SD-latent-mirroring

Clip interrogator: https://colab.research.google.com/github/pharmapsychotic/clip-interrogator/blob/main/clip_interrogator.ipynb
2 (apparently better than AUTO webui's interrogate): https://huggingface.co/spaces/pharma/CLIP-Interrogator, https://github.com/pharmapsychotic/clip-interrogator

Enhancement Workflow by anon: https://pastebin.com/8WVyDxt9

Inpainting a face by anon:

send the picture to inpaint
modify the prompt to remove anything related to the background
add (face) to the prompt
slap a masking blob over the whole face
mask blur 10-16 (may have to adjust after), masked content: original, inpaint at full resolution checked, full resolution padding 0, sampling steps ~40-50, sampling method DDIM, width and height set to your original picture's full res
denoising strength .4-.5 if you want minor adjustments, .6-.7 if you want to really regenerate the entire masked area
let it rip

  • AUTOMATIC1111 webui modification that "compensates for the natural heavy-headedness of SD by adding a line from 0 to sqrt(2) over the 0-74 token range" (anon) (i.e., it evens out the token weights with a linear model, which helps with the weight reset at 75 tokens (?)); see the sketch below
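
One possible reading of that description, as a toy numpy sketch (whether the ramp is added or multiplied, and its exact endpoints, are assumptions based on the anon's wording, not the actual patch):

```python
import numpy as np

# Hypothetical linear compensation described above: a ramp from 0 at token 0
# to sqrt(2) at token 74, boosting later tokens to counter SD's heavy-headedness.
ramp = np.linspace(0.0, np.sqrt(2.0), 75)

weights = np.ones(75)          # flat per-token emphasis as a stand-in
compensated = weights + ramp   # "adding a line" per the description
print(compensated[0], compensated[74])  # 1.0 ... ~2.414
```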

VAEs

Tutorial + how to use on ALL models (applies for the NAI vae too): https://www.reddit.com/r/StableDiffusion/comments/yaknek/you_can_use_the_new_vae_on_old_models_as_well_for/

Booru tag scraping:

Wildcards:

Wildcard extension: https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards/

Someone's prompt using a lot of wildcards: Positive Prompt: (masterpiece:1.4), (best quality:1.4), [[nsfw]], highres, large breasts, 1girl, detailed clothing, skimpy clothing, haircolor, haircut, hairlength, eyecolor, cum, ((fetish)), lingerie, lingeriestate, ((sexacts)), sexposition,

Artist Comparisons (may or may not work with NAI):

Some comparisons of 421 different artists in different models.

Anon's list of comparisons:

Creating fake animes:

Some observations by anon:

  1. Removing the spaces after the commas changed nothing
  2. Using "best_quality" instead of "best quality" did change the image: masterpiece,best_quality,akai haato but she is a spider,blonde hair,blue eyes
  3. Changing all of the spaces into underscores changed the image somewhat substantially.
  4. Replacing those commas with spaces changed the image again.

Reduce bias of dreambooth models: https://www.reddit.com/r/StableDiffusion/comments/ygyq2j/a_simple_method_explained_in_the_comments_to/?utm_source=share&utm_medium=web2x&context=3

Landscape tutorial: https://www.reddit.com/r/StableDiffusion/comments/yivokx/landscape_matte_painting_with_stable_diffusion/

Anon's process:

  • Start with a prompt to get the general scenario you have in mind, here I was just looking to seggs the rrat so I used the embed here >>36743515 and described some of her character features to help steer the AI (in this case hair details, sharp teeth, her mouse ears and tail) as well as making her be naked and having vaginal sex
  • Generate images at a default resolution size (512 by X pixels) at a relatively standard number of steps (30 in this case) and keep going until I find an image that's in a position I like (in this case seed 1920052602 gave me a very nice one to work with, as you can see here https://files.catbox.moe/8z2mua.png)
  • Copy the seed of the image and paste it into the Seed field on the Web UI, which will maintain the composition of the image. I then double the resolution I was working with (so here I went from 512 by 768 to 1024 by 1536) and checkmark the "Hires fix option" underneath the width and height sliders. Hires fix is the secret sauce on the Web UI that helps maintain the detail of the image when you are upscaling the resolution of the image, and combined with that Upscale latent space option I mentioned earlier it really enhances the detail. With that done you can generate the upscaled image.
  • Play around with the weights of the prompt tags and add things to the negatives to fix little things like hair being too red, tummy too chubby, etc. You have to be careful with adding new tags because that can drastically change the image

Anon's booba process:
>you can generate a perfect barbie doll anatomy but more accurate chuba in curated
>then switch to full, img2img it on the same seed after blotching nipples on it like a caveman, and hit generate

Boooba v2:

  1. Generate whatever NSFW proompt you were thinking of using the CURATED model, yes, I know that sounds ridiculous https://files.catbox.moe/b6k6i4.png
  2. Inpaint the naughty bits back in. You REALLY don't have to do a good job of this: https://files.catbox.moe/yegjrw.png
  3. Switch to Full after clicking "Save", set Strength to 0.69, Noise to 0.17, and make sure you copy/paste the same seed # back in. Hit Generate: https://files.catbox.moe/8dag88.png
    Compare that with what you'd get trying to generate the same exact proompt using the Full model purely txt2img on the same seed: https://files.catbox.moe/ytfdv3.png

Models, Embeddings, and Hypernetworks

Downloads listed as "sus" or "might be pickled" generally mean there were 0 replies and not enough "information" (like training info), or the replies indicated they were suspicious. I don't think any of the embeds/hypernets have had their code checked, so they could all be malicious, but as far as I know no one has gotten pickled yet

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner
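
If you want a quick look yourself before trusting a file, here's a minimal standard-library Python sketch that lists the global imports referenced by the pickle(s) inside a torch-format checkpoint (newer torch files are zip archives; it won't handle the old legacy format, and it's not a substitute for the scanners above):

```python
import pickletools
import sys
import zipfile

def list_pickle_imports(ckpt_path):
    """Print every module/name a GLOBAL or STACK_GLOBAL opcode pulls in.
    Anything outside the usual torch/numpy/collections suspects deserves a closer look."""
    with zipfile.ZipFile(ckpt_path) as zf:
        for entry in zf.namelist():
            if not entry.endswith(".pkl"):
                continue
            recent_strings = []
            for opcode, arg, _pos in pickletools.genops(zf.read(entry)):
                if isinstance(arg, str):
                    recent_strings = (recent_strings + [arg])[-2:]
                if opcode.name == "GLOBAL":
                    print(f"{entry}: {arg}")
                elif opcode.name == "STACK_GLOBAL":
                    print(f"{entry}: {' '.join(recent_strings)}")

if __name__ == "__main__":
    list_pickle_imports(sys.argv[1])
```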

Models*

Collection of potentially dangerous models: https://bt4g.org/search/.ckpt/1
Collection?: https://civitai.com/

potential magnet that someone gave me

magnet:?xt=urn:btih:689c0fe075ab4c7b6c08a6f1e633491d41186860&dn=Anything-V3.0.ckpt&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&tr=udp%3a%2f%2f9.rarbg.com%3a2810%2fannounce&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a6969%2fannounce&tr=udp%3a%2f%2fopentracker.i2p.rocks%3a6969%2fannounce&tr=https%3a%2f%2fopentracker.i2p.rocks%3a443%2fannounce&tr=udp%3a%2f%2ftracker.torrent.eu.org%3a451%2fannounce&tr=udp%3a%2f%2fopen.stealth.si%3a80%2fannounce&tr=http%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2fvibe.sleepyinternetfun.xyz%3a1738%2fannounce&tr=udp%3a%2f%2ftracker1.bt.moack.co.kr%3a80%2fannounce&tr=udp%3a%2f%2ftracker.zerobytes.xyz%3a1337%2fannounce&tr=udp%3a%2f%2ftracker.tiny-vps.com%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.theoks.net%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.swateam.org.uk%3a2710%2fannounce&tr=udp%3a%2f%2ftracker.publictracker.xyz%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.monitorit4.me%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.moeking.me%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.encrypted-data.xyz%3a1337%2fannounce&tr=udp%3a%2f%2ftracker.dler.org%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.army%3a6969%2fannounce&tr=http%3a%2f%2ftracker.bt4g.com%3a2095%2fannounce

Mag2

Little update, here's the link with all including VAE (second one)
magnet:?xt=urn:btih:689C0FE075AB4C7B6C08A6F1E633491D41186860&dn=Anything-V3.0.ckpt&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce

magnet:?xt=urn:btih:E87B1537A4B5B5F2E23236C55F2F2F0A0BB6EA4A&dn=NAI-Anything&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce

Mag3

magnet:?xt=urn:btih:689c0fe075ab4c7b6c08a6f1e633491d41186860&dn=Anything-V3.0.ckpt&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=https%3A%2F%2Fopentracker.i2p.rocks%3A443%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Fopen.stealth.si%3A80%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Fvibe.sleepyinternetfun.xyz%3A1738%2Fannounce&tr=udp%3A%2F%2Ftracker1.bt.moack.co.kr%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.zerobytes.xyz%3A1337%2Fannounce&tr=udp%3A%2F%2Ftracker.tiny-vps.com%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.theoks.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.swateam.org.uk%3A2710%2Fannounce&tr=udp%3A%2F%2Ftracker.publictracker.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.monitorit4.me%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.moeking.me%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.encrypted-data.xyz%3A1337%2Fannounce&tr=udp%3A%2F%2Ftracker.dler.org%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.army%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.altrosky.nl%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.bt4g.com%3A2095%2Fannounce

from: https://bt4g.org/magnet/689c0fe075ab4c7b6c08a6f1e633491d41186860

another magnet on https://rentry.org/sdmodels from the author

Berrymix Recipe
Rentry: https://rentry.org/berrymix

Make sure you have all the models needed, Novel Ai, Stable Diffusion 1.4, Zeipher F111, and r34_e4. All but Novel Ai can be downloaded from HERE
Open the Checkpoint Merger tab in the web ui
Set the Primary Model (A) to Novel Ai
Set the Secondary Model (B) to Zeipher F111
Set the Tertiary Model (C) to Stable Diffusion 1.4
Enter in a name that you will recognize
Set the Multiplier (M) slider all the way to the right, at "1"
Select "Add Difference"
Click "Run" and wait for the process to complete
Now set the Primary Model (A) to the new checkpoint you just made (Close the cmd and restart the webui, then refresh the web page if you have issues with the new checkpoint not being an option in the drop down)
Set the Secondary Model (B) to r34_e4
Ignore Tertiary Model (C) (I've tested it, it won't change anything)
Enter in the name of the final mix, something like "Berry's Mix" ;)
Set Multiplier (M) to "0.25"
Select "Weighted Sum"
Click "Run" and wait for the process to complete
Restart the Web Ui and reload the page just to be safe
At the top left of the web page click the "Stable Diffusion Checkpoint" drop down and select the Berry's Mix.ckpt (or whatever you named it) it should have the hash "[c7d3154b]"
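
For context, the two merge modes used above come down to simple per-tensor arithmetic. A rough torch sketch (not the webui's actual merger code; key handling and dtype details are simplified):

```python
import torch

def weighted_sum(a, b, m):
    """Interpolate two state_dicts: result = A*(1-M) + B*M."""
    return {k: a[k] * (1 - m) + b[k] * m for k in a.keys() & b.keys()}

def add_difference(a, b, c, m):
    """Graft 'what B learned relative to C' onto A: result = A + (B - C)*M.
    With M=1 this is the first Berrymix step (A=NAI, B=F111, C=SD 1.4)."""
    return {k: a[k] + (b[k] - c[k]) * m for k in a.keys() & b.keys() & c.keys()}

# a, b, c would come from e.g. torch.load("model.ckpt", map_location="cpu")["state_dict"]
```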

Fruit Salad Mix (might not be worth it to make)

Fruit Salad Guide

Recipe for the "Fruit Salad" checkpoint:
Make sure you have all the models needed, Novel Ai, Stable Diffusion 1.5, Trinart-11500, Zeipher F111, r34_e4, Gape_60 and Yiffy.
Open the Checkpoint Merger tab in the web ui
Set the Primary Model (A) to Novel Ai
Set the Secondary Model (B) to Yiffy e18
Set the Tertiary Model (C) to Stable Diffusion 1.4
Enter in a name that you will recognize
Set the Multiplier (M) slider to the left, at "0.1698765"
Select "Add Difference"
Click "Run" and wait for the process to complete
Now set the Primary Model (A) to the new checkpoint you just made (Close the cmd and restart the webui, then refresh the web page if you have issues with the new checkpoint not being an option in the drop down)
Set the Secondary Model (B) to r34_e4
Set the Tertiary Model (C) to Zeipher F111 (I've tested it, it changes EVERYTHING)
Set Multiplier (M) to "0.56565656"
Select "Weighted Sum"
Click "Run" and wait for the process to complete
Restart the Web Ui and reload the page just to be safe
Now download a previous version of WebUI, which still contains the "Inverse Sigmoid" option for checkpoint merger.
Now set the Primary Model (A) to the new checkpoint you just made
Set the Secondary Model (B) to Trinart-11500
Set Multiplier (M) to "0.768932"
Select "Inverse Sigmoid"(this is kind of like Sigmoid but inverted)
Click "Run" and wait for the process to complete
Restart the Web Ui and reload the page just to be safe
Now set the Primary Model (A) to the new checkpoint you just made.
Set the Secondary Model (B) to SD 1.5
Set the Tertiary Model (C) to Gape_60
Set the name of the final mix to something you will remember, like "Fruit's Salad" ;)
Set Multiplier (M) to "1"
Select "Weighted Sum"
Click "Run" and wait for the process to complete
Restart the Web Ui and reload the page just to be safe
At the top left of the web page click the "Stable Diffusion Checkpoint" drop down and select the Fruit's Salad.ckpt (or whatever you named it)

Modified berry mix

model is custom berrymix-style mix:
Add Difference (A=NAI, B=F222, C=SD 1.4, M=1.0) => tmp.ckpt
Mixed Sum (A=tmp.ckpt, B=SD 1.5, M=0.2) (M might've been 0.25, but i think it was 0.2)
Examples: https://files.catbox.moe/3pscvp.png, https://i.4cdn.org/g/1667680791817462.png

Blueberry Mix

NAI + SD1.5 Weighted Sum @ 0.25 -> NAI-v1.5-0.25
NAI-v1.5-0.25 + F222 + SD1.5 Difference @ 1.0 -> berry-lite
berry-lite + r34_e4 Weighted Sum @ 0.15 -> blueberrymix

Blackberry Mix (Blueberry swapped for older SD and Zeipher female anatomy models)

NAI + SD1.4 Weighted Sum @ 0.25 -> NAI-v1.4-0.25
NAI-v1.4-0.25 + F111 + SD1.4 Difference @ 1.0 -> berry-lite
berry-lite + r34_e4 Weighted Sum @ 0.15 -> blackberrymix

magnet:?xt=urn:btih:eb085b3e22310a338e6ea00172cb887c10c54cbc&dn=cafe-instagram-unofficial-test-epoch-9-140k-images-fp32.ckpt&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80&tr=udp%3A%2F%2Fopentor.org%3A2710&tr=udp%3A%2F%2Ftracker.ccc.de%3A80&tr=udp%3A%2F%2Ftracker.blackunicorn.xyz%3A6969&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969

EveryDream Trainer

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner

Download + info + prompt templates: https://github.com/victorchall/EveryDream-trainer

Dreambooth Models:

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner

Links:

Embeddings

If an embedding is >80mb, I mislabeled it and it's a hypernetwork

Use a download manager to download these. It saves a lot of time + good download managers will tell you if you have already downloaded one

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner

You can check .pts here for their training info using a text editor

Found on 4chan:

NOTE TO MYSELF: ADD THAT PONY EMBEDDING THAT I DOWNLOADED 2 WEEKS AGO

Found on Discord:

Found on Reddit:

Hypernetworks:

If a hypernetwork is <80mb, I mislabeled it and it's an embedding

Use a download manager to download these. It saves a lot of time + good download managers will tell you if you have already downloaded one

All files in this section (ckpt, vae, pt, hypernetwork, embedding, etc) can be malicious: https://docs.python.org/3/library/pickle.html, https://huggingface.co/docs/hub/security-pickle. Make sure to check them for pickles using a tool like https://github.com/zxix/stable-diffusion-pickle-scanner

Chinese telegram (uploaded by telegram anon): magnet:?xt=urn:btih:8cea1f404acfa11b5996d1f1a4af9e3ef2946be0&dn=ChatExport%5F2022-10-30&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce

I've made a full export of the Chinese Telegram channel.

It's 37 GB (~160 hypernetworks and a bunch of full models).
If you don't want all that, I would recommend downloading everything but the 'files' folder first (like 26 MB), then opening the html file to decide what you want.

Found on 4chan:

Found on Korean Site of Wisdom (WIP):

Found on Discord:

Colored eyes:

>Hey everyone , this hypernetwork was released by me (IWillRemember) (IWillRemember#1912 on discord) if you have any questions you can find me on discord!
>
>Did the Hn as a commission for a friend 😄
>
>I'm releasing an Hn to do better animation like glowing eyes, and a more slender face/upper body.
>
>The tags are : 
>detailed eyes, 
>(color) eyes  = ex: white eyes, blue eyes, etc etc
>collarbone
>
>Trained for 12k steps on a 80 ish images dataset
>
>You can use the Hn with a str of 1 without any problem.
>
>Happy prompting!
>
>Example: https://media.discordapp.net/attachments/1023082871822503966/1038115846222008392/00162-3940698197-masterpiece_highest_quality_digital_art_1girl_on_back_detailed_eyes_perfect_face_detailed_face_breasts_white_hair_yell.png?width=648&height=702
>
>https://mega.nz/file/dHFwmaxS#NQhMPjT4TElPXX_YAZhTsFrQ36PDJhpWFm9BcHU_BO4

Aesthetic Gradients

Collection of Aesthetic Gradients: https://github.com/vicgalle/stable-diffusion-aesthetic-gradients/tree/main/aesthetic_embeddings

Polar Resources

DEAD/MISSING

If you have one of these, please get it to me

Apparently there's a Google drive collection of downloads? (might be the korean site but mistyped)

Dreambooth:

Embed:

Hypernetworks:

Datasets:

Training dataset with aesthetic ratings: https://github.com/JD-P/simulacra-aesthetic-captions

Training

Dreambooth colab with custom model (old, so might be outdated): https://desuarchive.org/g/thread/89140837/#89140895

Extension: https://github.com/d8ahazard/sd_dreambooth_extension

anything.ckpt comparisons
Old final-pruned: https://files.catbox.moe/i2zu0b.png
v3-pruned-fp16: https://files.catbox.moe/k1tvgy.png
v3-pruned-fp32: https://files.catbox.moe/cfmpu3.png
v3 full or whatever: https://files.catbox.moe/t9jn7y.png

Supposedly how to append model data without merging by anon:

x = (Final Dreambooth Model) - (Original Model)
filter x for x >= (Some Threshold)
out = (Model You Want To Merge It With) * (1 - M) + x * M
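
A rough torch sketch of that formula (the threshold value, the use of absolute magnitude for the filter, and the key handling are assumptions; this is untested):

```python
import torch

def append_difference(dreambooth, original, target, m=0.5, threshold=0.0):
    """x = dreambooth - original, zero out changes smaller than the threshold,
    then blend into the target per the formula above: out = target*(1-M) + x*M."""
    out = {}
    for k, t in target.items():
        if k in dreambooth and k in original:
            x = dreambooth[k] - original[k]
            x = torch.where(x.abs() >= threshold, x, torch.zeros_like(x))  # 'filter x'
            out[k] = t * (1 - m) + x * m
        else:
            out[k] = t
    return out
```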

Model merging method: https://github.com/samuela/git-re-basin

set TORCH_COMMAND="pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116"
• Runs at about 1.7 it/s with a 9700k, 50% CPU usage, full VRAM usage, approx 10-40% GPU usage (it goes up and down).
  • Training a TI on 6gb (not sure if safe or even works, instructions by uploader anon): https://pastebin.com/iFwvy5Gy
    • Have xformers enabled.

      This diff does 2 things.

      1. enables cross attention optimizations during TI training. Voldy disabled the optimizations during training because he said it gave him bad results. However, if you use the InvokeAI optimization or xformers after the xformers fix it does not give you bad results anymore.
        This saves around 1.5GB vram with xformers
      2. unloads vae from VRAM during training. This is done in hypernetworks, and idk why it wasn't in the code for TI. It doesn't break anything and doesn't make anything worse.
        This saves around .2 GB VRAM

      After you apply this, turn on Move VAE and CLIP to RAM and Use cross attention optimizations while training

  • By anon:

    No idea if someone else will have a use for this but I needed to make it for myself since I can't get a hypernetwork trained regardless of what I do.

    https://mega.nz/file/LDwi1bab#xrGkqJ9m-IsqsTQNixVkeWrGw2HvmAr_fx9FxNhrrbY

    That link above is a spreadsheet where you paste the hypernetwork_loss.csv data into the A1 cell (A2 is where the numbers should start). Then you can use M1 to set how many epochs of the most recent data you want to use for the red trendline (green is the same length but starts before red). Outlayer % is if you want to filter out extreme points: 100% means all points are considered for the trendline, 95% filters out the top and bottom 5%, etc. Basically you can use this to see where the training started fucking up.

  • Anon's best:

    Creation:
    1,2,1
    Normalized Layers
    Dropout Enabled
    Swish
    XavierNormal (Not sure yet on this one. Normal or XavierUniform might be better)

Training:

Rate: 5e-5:1000, 5e-6:5000, 5e-7:20000, 5e-8:100000
Max Steps: 100,000

  • Anon's Guide:
  1. Having good text tags on the images is rather important. This means laboriously going through and adding tags to the BLIP tags and editing the BLIP tags as well, and often manually describing the image. Fortunately my dataset had only like...30 images total, so I was able to knock it out pretty quick, but I can imagine it being completely obnoxious for a 500 image gallery. Although I guess you could argue that strict prompt accuracy becomes less important as you have more training examples. Again, if they would just add an automatic deepdanbooru option alongside the BLIP for preprocessing that would take away 99% of the work.
  2. Vectors. Honestly I started out making my embedding at 8, it was shit. 16, still shit but better. 20, pretty good, and I was like fuck it let's go to 50 and that was even better still. IDK. I don't think you can go much higher though if you want to use your tag anywhere but at the very beginning of a 75 token block. I had heard that having more tokens = needing more images and also overfitting, but I did not find this to be the case.
  3. The other major thing I needed to do is make a character.txt for textual inversion. For whatever reason, the textual inversion templates literally have NO character/costume template. The closest thing is subject which is very far off and very bad. Thus, I had to write my own: https://files.catbox.moe/wbat5x.txt
  4. Yeah for whatever reason the VAE completely fries and fucks up any embedding training, and you can only find this out from reading comments on 4chan or in the issues list of the github. The unload VAE when training option DOES NOT WORK for textual embedding. Again, I don't know why. Thus it is absolutely 100% stone cold essential to rename or move your VAE and then relaunch everything before you do any textual inversion training. Don't forget to put it back afterwards (and relaunch again), because without the VAE everything is a blurry mess and faces look like Sloth from the Goonies.

So all told, this is the process:

  1. Get a dataset of images together. Use the preprocess tab and the BLIP and the split and flip and all that.
  2. Laboriously go through EVERY SINGLE IMAGE YOU JUST MADE while simultaneously looking at their text file BLIP descriptions and updating them with the booru tags or deepdanbooru tags (which you have to have manually gotten ahead of time if you want them), and making sure the BLIP caption is at least roughly correct, and deleting any image which doesn't feature your character after the cropping operation if it was too big. EVERY. SINGLE. IMAGE. OAJRPIOANJROPIanrpianfrpianra
  3. Now that the hard part's over, just make your embedding using the make embedding page. Choose some vector amount (I mean I did good with 50, whatever), and set girl as your initialization or whatever's appropriate.
  4. Go to train page and get training. Everything on the page is pretty self explanatory. I used 5e-02:2000, 5e-03:4000, 5e-04:6000, 5e-05 for the learning rate schedule but you can fool around. Make sure the prompt template file is pointed at an appropriate template file for what you're trying to do like the character one I made, and then just train. Honestly, it shouldn't take more than 10k steps which goes by pretty quick even with batch size 1.

OH and btw, obviously use https://github.com/mikf/gallery-dl to scrape your image dataset from whichever boorus you like. Don't forget the --write-tags flag!

Vector guide by anon:
Think of vectors per token as the number of individual traits the ai will associate with your embedding. For something like "coffee cup", this is going to be pretty low generally, like 1-4. For something more like an artist's style, you're going to want it to be higher, like 12-24. You could go for more, but you're really eating into your token budget on prompts then.

Its also worth noting, the higher the count, the more images and more varied images you're going to want.

You want the ai to find things that are consistent thematics in your image. If you use a small sample size, and all your images just happen to have girls in a bikini, or all with blonde hair, that trait might get attributed to your prompt, and suddenly "coffee cup" always turns out blonde girls in bikinis too.

  • Another training guide: https://www.reddit.com/r/stablediffusion/comments/y91luo
  • Super simple embed guide by anon: Grab the high quality images, run them through the processor. Create an embedding called art by {artist}. Then train that same embedding with your processed images and set the learning rate to the following: 0.1:500, 0.05:1000, 0.025:1500, 0.001:2000, 1e-5. Run it for 10k steps and you'll be good. No need for an entire hypernetwork.

Datasets

FAQ

Check out https://rentry.org/sdupdates for other questions
https://rentry.org/sdg_FAQ

What's all the new stuff?

Check here to see if your question is answered:

How do I set this up?

Refer to https://rentry.org/nai-speedrun (has the "Asuka test")
Easy guide: https://rentry.org/3okso
Standard guide: https://rentry.org/voldy
Detailed guide: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/2017
Paperspace: https://rentry.org/865dy

AMD Guide: https://rentry.org/sdamd

What's the "Hello Asuka" test?

It's a basic test to see if you're able to get a 1:1 recreation with NAI and have everything set up properly. Coined after asuka anon and his efforts to recreate 1:1 NAI before all the updates.

Refer to

What is pickling/getting pickled?

ckpt files and python files can execute code. Getting pickled is when these files execute malicious code that infects your computer with malware. It's a memey/funny way of saying you got hacked.

I want to run this, but my computer is too bad. Is there any other way?
Check out one of these (I have not used most of these, so they might be unsafe to use):

How do I directly check AUTOMATIC1111's webui updates?

For a complete list of updates, go here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commits/master

What do I do if a new update bricks/breaks my AUTOMATIC1111 webui installation?

Go to https://github.com/AUTOMATIC1111/stable-diffusion-webui/commits/master
See when the change happened that broke your install
Get the blue number on the right before the change
Open a command line/git bash to where you usually git pull (the root of your install)
'git checkout <blue number without these angled brackets>'
to reset your install, use 'git checkout master'

What are embeddings?

https://textual-inversion.github.io/
More info in sdupdates (1), various wikis, and various rentrys
TLDR: it mashes tokens together until it finds things in the model that match most closely with the training images

What is...?

What is a VAE?

Variational autoencoder, basically a "compressor" that can turn images into a smaller representation and then "decompress" them back to their original size. This is needed so you don't need tons of VRAM and processing power since the "diffusion" part is done in the smaller representation (I think). The newer SD 1.5 VAEs have been trained more and they can recreate some smaller details better.
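
If you want to poke at a VAE directly, here's a minimal sketch using the diffusers library (the model name is just an example; the 0.18215 factor is SD's usual latent scaling):

```python
import numpy as np
import torch
from PIL import Image
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")  # example standalone SD VAE

img = Image.open("input.png").convert("RGB").resize((512, 512))
x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0   # scale pixels to [-1, 1]
x = x.permute(2, 0, 1).unsqueeze(0)                          # HWC -> NCHW

with torch.no_grad():
    latents = vae.encode(x).latent_dist.sample() * 0.18215   # 512x512x3 image -> 4x64x64 latent
    recon = vae.decode(latents / 0.18215).sample             # decode back to pixel space

out = ((recon[0].permute(1, 2, 0).clamp(-1, 1) + 1) * 127.5).round().byte().numpy()
Image.fromarray(out).save("roundtrip.png")
```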

What is pruning?

Removing unnecessary data (anything that isn't needed for image generation) from the model so that it takes less disk space and fits more easily into your VRAM
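
A rough sketch of what a typical prune script does (simplified; real scripts also deal with EMA weights and other edge cases, so treat this as illustrative):

```python
import torch

def prune_ckpt(src, dst):
    """Keep only the inference weights: drop optimizer states and other training
    leftovers by taking just the state_dict, and cast fp32 tensors to fp16."""
    ckpt = torch.load(src, map_location="cpu")
    sd = ckpt.get("state_dict", ckpt)   # some checkpoints store weights at the top level
    pruned = {
        k: v.half() if isinstance(v, torch.Tensor) and v.dtype == torch.float32 else v
        for k, v in sd.items()
    }
    torch.save({"state_dict": pruned}, dst)

# prune_ckpt("model.ckpt", "model-pruned.ckpt")
```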

What is a pickle, not referring to the python file format? What is the meme surrounding this?

When the NAI model leaked people were scared that it might contain malicious code that could be executed when the model is loaded. People started making pickle memes because of the file format.

Why is some stuff tagged as being 'dangerous', and why does the StableDiffusion WebUI have a 'safe-unpickle' flag? -- I'm stuck on pytorch 1.11 so I have to disable this

Safe unpickling checks the pickle's library imports against an approved list. If it tries to import something that isn't on the list, it won't load it. This doesn't necessarily mean the file is dangerous, but you should be cautious. Some stuff might still be able to slip through and execute arbitrary code on your computer.
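
The allow-list idea is the standard restricted-unpickler pattern from the Python docs; a minimal sketch (the allowed entries here are just examples, the webui's real list is longer):

```python
import io
import pickle

ALLOWED = {
    ("collections", "OrderedDict"),
    ("torch._utils", "_rebuild_tensor_v2"),
    # ... example entries only; a real allow-list covers more torch/numpy internals
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # only resolve globals that are explicitly approved, refuse everything else
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked import: {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```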

Is the rentry stuff all written by one person or many?

There are many people maintaining different rentries.

How do I run NSFW models in colab?
Info by anon, I'm not sure if it works:

!gdown https://huggingface.co/Daswer123/asdasdadsa/resolve/main/novelai_full.ckpt -O /content/stable-diffusion-webui/models/Stable-diffusion/nai.ckpt
!gdown https://huggingface.co/Daswer123/asdasdadsa/resolve/main/animevae.pt -O /content/stable-diffusion-webui/models/Stable-diffusion/nai.vae.pt
!gdown https://huggingface.co/Daswer123/asdasdadsa/raw/main/nai.yaml -O /content/stable-diffusion-webui/models/Stable-diffusion/nai.yaml
!gdown https://huggingface.co/Daswer123/asdasdadsa/resolve/main/v2.pt -O /content/stable-diffusion-webui/v2.pt
!gdown https://huggingface.co/Daswer123/asdasdadsa/raw/main/v2enable.py -O /content/stable-diffusion-webui/scripts/v2enable.py

!wget https://pastebin.com/raw/ukEFznTb -O /content/stable-diffusion-webui/ui-config.json

now paste the above in a code block and execute it, or use this colab https://colab.research.google.com/drive/1zN99ZouzlYObQaPfzwbgwJr6ZqcpYK5-?usp=sharing#scrollTo=SSP9suJcjlWs and select nai in the model dropdown

How do I get better performance on my 4090?

Check this: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2449

How does token padding work?

https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2192
Helps improve coherency when words overlap token set boundaries.

When a token is at a boundary of 75 and it is not a comma, the last n tokens (n can be specified in the config) are checked to see if any of them is a comma. If one is, padding is added starting from that comma up to the next multiple of 75, and the tokens that were there before get moved into the next token set.

ex: {[74]=comma,[75]=orange},{[76]=hair} -> {[74]=comma,[75]=padding},{[76]=orange, [77]=hair}
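
As a toy Python sketch of that behavior (the chunk size of 75 matches the description; the lookback window and pad token are illustrative):

```python
def split_with_comma_padding(tokens, chunk=75, lookback=20, pad="<pad>"):
    """Split a flat token list into 75-token chunks, but when a chunk boundary lands
    mid-phrase, back up to the last comma in the lookback window, pad the rest of the
    chunk, and move the cut-off tokens into the next chunk."""
    chunks, rest = [], list(tokens)
    while rest:
        head, rest = rest[:chunk], rest[chunk:]
        window = head[-lookback:]
        if rest and head[-1] != "," and "," in window:
            cut = len(head) - window[::-1].index(",")   # index just past the last comma
            rest = head[cut:] + rest                     # push the split phrase forward
            head = head[:cut] + [pad] * (chunk - cut)
        chunks.append(head)
    return chunks

# e.g. with a comma at position 74 and "orange hair" straddling the 75 boundary,
# "orange" gets pushed into the next chunk and the first chunk is padded, as in the example above
```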

How do I use highres fix?

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-upscale

From anon:

Enable Upscale latent space image when doing hires. fix in settings
Enable Highres. fix
Keep the width and height at 0 in the section that opened up, and play around with denoising from the default 0.7
Adjust sliders from 512x512 to your desired higher resolution
wait longer for each generation
learn to use SD Upscale script in img2img because it can be better sometimes

Why doesn't model merging work?

Make sure you have enough RAM (2 models that are 2 GB each require 4 GB of RAM). Increase your page file if necessary

Why is my downloaded embedding so bad?

Make sure that you're using the correct model. Anon says,"Embeddings and hypernetworks only work reliably on the model they were trained for". Also, play around with weighting your embedding

Which depickler should I use?

By anon: The webui scanner is very basic. The zxix scanner is much more thorough but it is not clear that it provides truly comprehensive protection. The lopho scanner is targeted directly at Torch models (great!) but is not a standalone script (not great!).

Why are some of my prompts outputting black images?

Add " --no-half-vae " (remove the quotations) to your commandline args in webui-user.bat

What's the difference between embeds, hypernetworks, and dreambooths? What should I train?
Anon:

I've tested a lot of the model modifications and here are my thoughts on them:
embeds: these are tiny files which find the best representation of whatever you're training them on in the base model. By far the most flexible option and will have very good results if the goal is to group or emphasize things the model already understands
hypernetworks: these are like instructions that slightly modify the result of the base model after each sampling step. They are quite powerful and work decently for everything I've tried (subjects, styles, compositions). The cons are they can't be easily combined like embeds. They are also harder to train because good parameters seem to vary wildly, so a lot of experimentation is needed each time
dreambooth: modifies part of the model itself and is the only method which actually teaches it something new. Fast and accurate results but the weights for generating adjacent stuff will get trashed. These are gigantic and have the same cons as embeds

Info:

Boorus:

Upscalers:

Guide to upscaling well: https://desuarchive.org/g/thread/89518099/#89518607

RunwayML: https://github.com/runwayml/stable-diffusion

GPU comparison: https://docs.google.com/spreadsheets/d/1Zlv4UFiciSgmJZncCujuXKHwc4BcxbjbSBg71-SdeNk/edit#gid=0

Undervolt guide: https://www.reddit.com/r/nvidia/comments/tw8j6r/there_are_two_methods_people_follow_when/

frames to video: https://github.com/jamriska/ebsynth

Paperspace guide: https://rentry.org/865dy

More twitter anons:
https://twitter.com/knshtyk/media
https://twitter.com/NAIoppailoli
https://twitter.com/PorchedArt
https://twitter.com/FEDERALOFFICER
https://twitter.com/Elf_Anon
https://twitter.com/ElfBreasts
https://twitter.com/BluMeino
https://twitter.com/Lisandra_brave
https://twitter.com/nadanainone
https://twitter.com/Rahmeljackson
https://twitter.com/dproompter
https://twitter.com/Kw0337
https://twitter.com/AICoomer
https://twitter.com/mommyartfactory
https://twitter.com/ai_sneed
https://twitter.com/YoucefN30829772
https://twitter.com/KLaknatullah
https://twitter.com/spee321
https://twitter.com/EyeAI_
https://twitter.com/S37030315
https://twitter.com/ElfieAi
https://twitter.com/Headstacker
https://twitter.com/RaincoatWasted
https://twitter.com/epitaphtoadog
https://twitter.com/Merkurial_Mika
https://twitter.com/FizzleDorf
https://twitter.com/ai_hexcrawl

Sigmoid math: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2658

maybe you can edit this to allow 8gb DB training: https://colab.research.google.com/github/ShivamShrirao/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb, https://github.com/ShivamShrirao/diffusers/tree/main/examples/dreambooth

Linux help: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/3525
Linux thing: https://github.com/pytorch/examples/tree/main/mnist

cheap GPU thing: https://www.coreweave.com/gpu-cloud-pricing

Karras: https://arxiv.org/pdf/2206.00364.pdf

something that uses DirectML (like tensorflow): https://www.travelneil.com/stable-diffusion-windows-amd.html

archive: https://archive.ph/

windows xformers: https://www.reddit.com/r/StableDiffusion/comments/xz26lq/automatic1111_xformers_cross_attention_with_on/

4chan archives:
https://archive.alice.al/vt/
https://warosu.org/lit/
desuarchive.org/
https://archived.moe/

prompting thing: https://www.reddit.com/r/StableDiffusion/comments/yirl1c/we_are_pleased_to_announce_the_launch_of_the/

something for ML: https://github.com/geohot/tinygrad

something for better centralization but probably will be unpopular compared to auto:
https://github.com/Sygil-Dev/nataili
(apparently, according to anon, made by the creators of https://aqualxx.github.io/stable-ui/)

Old guide for DB: https://techpp.com/2022/10/10/how-to-train-stable-diffusion-ai-dreambooth/

colab thing: https://github.com/JingShing/ImageAI-colab-ver

drama thing that doesn't really change anything: https://www.reddit.com/r/StableDiffusion/comments/ylrop5/automatic1111_there_is_no_requirement_to_make/

cpu img2img that is slow and filtered: https://huggingface.co/spaces/fffiloni/stable-diffusion-img2img

Debian linux guide: http

anon:

sentences are superior. SD interprets text by relating concepts together based on proximity and order. If you batch just tags together you'll find that adding one tag can change the whole prompt and is affected by words near it, whereas with a sentence the changes are more subtle and gradual because words have separators between them. It makes sense to just make good sentences in the first place
https://en.m.wikipedia.org/wiki/BERT_(language_model)
it tries to simulate how synapses in the human brain connect when looking at text. The weights it places or the words it interprets have different values based on interpretation. But practically it means if you prompt

leather, collar,,,,,,,,,,,,,,,,,,,,,,,,,,,,,cheese 

The filler text just means you're likely to get a leather collar with cheese separately. To get a leather cheese collar it's going to be very difficult since these concepts are far removed from one another.

Resource thing: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Images-Filename-Name-and-Subdirectory

ai voice self train: https://github.com/neonbjb/tortoise-tts

ERNIE creators (about: Awesome pre-trained models toolkit based on PaddlePaddle. (400+ models including Image, Text, Audio, Video and Cross-Modal with Easy Inference & Serving)): https://github.com/PaddlePaddle/PaddleHub

info: https://www.reddit.com/r/StableDiffusion/comments/yjwuls/demystifying_prompting_what_you_need_to_know/

something dreambooth someone used it so I add it here: https://github.com/kanewallmann/Dreambooth-Stable-Diffusion

youtuber that helped people understand webui: https://www.youtube.com/channel/UCEIMmQErvGDLXpmlzp7L-yg

something: https://theinpaint.com/

cool mmd to img2img while staying consistent: https://twitter.com/nZk1015/status/1589317103383113729

Pretty nice explanation on VAE: https://youtu.be/hoLmBFEsHHg

api info: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/3734

poser: https://app.posemy.art/

safetensor thing?: https://ctftime.org/writeup/16723

Nvidia overclocking, undervolting, benching, etc: https://github.com/LunarPSD/NvidiaOverclocking/blob/main/Nvidia%20Overclocking.md

Pytorch and torchvision compiled with different CUDA versions (reinstall a matching pair inside the venv):

source venv/bin/activate
pip install -I torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113

Upscaler and img viewer: https://sourceforge.net/projects/jpegview/

anon info for 4090: anyone on a 4090

in launch.py replace line 127

With

torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.11.0+cu115 torchvision==0.12.0+cu115 --extra-index-url https://download.pytorch.org/whl/cu115")

as this build of Torch is built against a newer CUDA than the one AUTOMATIC uses by default, which otherwise causes issues.

depicklage: https://github.com/trailofbits/fickling

cool ai speech synthesis where they talk to each other: https://infiniteconversation.com/

Torch is not able to use GPU error fix by anon:
Activate the webui venv and force-reinstall torch with CUDA:
If you are on windows:
open cmd on
stable-diffusion-webui\venv\Scripts
type activate
type pip install -U torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113

If you want you can install with a more recent torch/cuda too
pip install -U torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117

Webm maker: https://github.com/dfaker/WebmGenerator

danbooru: https://paperswithcode.com/dataset/danbooru2020

tutorial on how SD works: https://www.youtube.com/watch?v=1CIpzeNxIhU

Funny read between people who understand that ai will boost artist workflows and someone who looked at it once and became the master of ethics lol: https://desuarchive.org/g/thread/89694458#89697202
pt2 of the debacle: https://desuarchive.org/g/thread/89697696#89699922

dataset of over 238000 synthetic images generated with AI models rated on their aesthetic value: https://github.com/JD-P/simulacra-aesthetic-captions

Online Deepdanbooru: https://huggingface.co/spaces/hysts/DeepDanbooru
Online ddb v2 (has tags): https://huggingface.co/spaces/NoCrypt/DeepDanbooru_string

Hall of Fame

automatic1111

Miscellaneous

Guide to installing NAI by anon:

Guide for locally installing NovelAI:

DOWNLOADING:
get NovelAI from magnet:?xt=urn:btih:5bde442da86265b670a3e5ea3163afad2c6f8ecc&dn=novelaileak
---or from picrel >>35097470 (Dead)
get MinGW/Git from https://git-scm.com/download/win (or direct download: https://github.com/git-for-windows/git/releases/download/v2.37.3.windows.1/Git-2.37.3-64-bit.exe )
---When installing Git, make sure to select "Git Bash here".
get Python from https://www.python.org/downloads/windows/ (or direct download: https://www.python.org/ftp/python/3.10.6/python-3.10.6-amd64.exe )
---When installing Python, make sure to select "Add Python to PATH".

Start Git Bash, and type "git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui";
When you do this, it will create a folder called \stable-diffusion-webui\ (by default, in C:\Users\Username\)

When you download the torrent:
1. In \stableckpt\animefull-final-pruned\ rename 'model.ckpt' to 'final-pruned.ckpt'
---Note: the 'animefull' model is the best overall, but struggles at recognizing and outputting VTuber characters in particular. For a better model in that regard, use the 'SFW' model instead; despite the name, it is not SFW-limited.
2. Move 'final-pruned.ckpt' (the file you just renamed) and animevae.pt from \stableckpt\ to \stable-diffusion-webui\models\Stable-diffusion
3. Rename 'animevae.pt' (the file you just moved) to 'final-pruned.vae.pt'
4. Create a new folder \stable-diffusion-webui\models\hypernetworks\ and move all of the modules (which end in .pt) from \stableckpt\modules\modules\ (in the NovelAI torrent) into it (a script version of steps 1-4 is sketched below)
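
If you'd rather script steps 1-4, here's a hedged pathlib sketch (paths follow the defaults above; adjust them to where your torrent and webui actually live, and it copies rather than moves so the originals stay intact):

```python
import shutil
from pathlib import Path

leak = Path(r"C:\path\to\novelaileak")            # wherever the torrent was saved (adjust)
webui = Path.home() / "stable-diffusion-webui"    # default clone location from this guide

models = webui / "models" / "Stable-diffusion"
hypernets = webui / "models" / "hypernetworks"
hypernets.mkdir(parents=True, exist_ok=True)      # step 4: create the hypernetworks folder

# steps 1-2: model.ckpt -> final-pruned.ckpt in the webui models folder
shutil.copy2(leak / "stableckpt" / "animefull-final-pruned" / "model.ckpt",
             models / "final-pruned.ckpt")
# steps 2-3: animevae.pt -> final-pruned.vae.pt so the webui pairs it with the model
shutil.copy2(leak / "stableckpt" / "animevae.pt", models / "final-pruned.vae.pt")
# step 4: copy all the hypernetwork modules
for module in (leak / "stableckpt" / "modules" / "modules").glob("*.pt"):
    shutil.copy2(module, hypernets / module.name)
```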

Whenever you want to run NovelAI from your computer, run webui-user.bat. The first time will initialize, which may cause some errors.
If there are no errors, it will give you a web address (usually http://127.0.0.1:7860/ ). This is the WebUI, from which you run NovelAI using your computer's disk.
The usual errors will require you to edit the .bat--do this in Notepad or n++.
There will be a line of code beginning with "COMMANDLINE_ARGS=". Add "--precision full --no-half" directly after this, on the same line. (remove the quotations)
If you get an error about "--skip-torch-cuda-test", add it as well (making the line "--skip-torch-cuda-test --precision full --no-half").

After you started the .bat and got the WebUI loaded, go to Settings and scroll to Stable Diffusion. Set the checkpoint to final-pruned and the hypernetwork of your choice.

Old drama for archival, stuff in here might be wrong/worded poorly: https://gist.github.com/questianon/e2b424a1ca1acb330bd3b99d053ba68f

Pub: 01 Nov 2022 09:02 UTC
Edit: 21 Nov 2022 20:08 UTC