Outdated stuff that is either easier to do now, has a better repo elsewhere, or was made on older versions of SD:

Random stuff/info from random places

Random Prompts: https://rentry.org/randomprompts

http://dalle2-prompt-generator.s3-website-us-west-2.amazonaws.com/
https://randomwordgenerator.com/
funny prompt gen that surprisingly works: https://www.grc.com/passwords.htm
Unprompted extension released: https://github.com/ThereforeGames/unprompted

Ideas for when you have none: https://pentoprint.org/first-line-generator/
Colors: http://colorcode.is/search?q=pantone

vae info that probably doesn't apply to many models: According to an anon, the vae seems to provide saturation/contrast and some line thickness (vae-ft-ema-560000-ema-pruned, https://huggingface.co/stabilityai/sd-vae-ft-ema-original/blob/main/vae-ft-ema-560000-ema-pruned.ckpt). Example (left with the 560k vae, right with the anything vae): https://i.4cdn.org/h/1669086238979897s.jpg

not sure if people still use NAI:
NAI to webui translator (not 100% accurate): https://seesaawiki.jp/nai_ch/d/%a5%d7%a5%ed%a5%f3%a5%d7%a5%c8%ca%d1%b4%b9

Outdated guide: https://rentry.co/8vaaa

  • Manually tagging the pictures allows for faster convergence than auto-tagging. More work is needed to see if deepdanbooru autotagging helps convergence

Reduce bias of dreambooth models: https://www.reddit.com/r/StableDiffusion/comments/ygyq2j/a_simple_method_explained_in_the_comments_to/?utm_source=share&utm_medium=web2x&context=3

Landscape tutorial: https://www.reddit.com/r/StableDiffusion/comments/yivokx/landscape_matte_painting_with_stable_diffusion/

Img2img rotoscoping tutorial by anon:

1. Extract the image sequence from the video.
2. Test your prompt using the 1st photo from the batch.
3. Find a suitable prompt; the pose/sexual acts should stay the same as the original to prevent weirdness.
4. CFG Scale and Denoising Strength are very important:
> A low CFG Scale will make the image follow your prompt less and make it blurrier and messier (I use 9-13).
> Denoising Strength determines the mix between your prompt and your image: 0 = original input, 1 = only the prompt, with nothing resembling the input except the colors.
The interesting thing I've noticed is that Denoising Strength is not linear, it behaves more exponentially (my speculation: 0-0.6 = still reminds you of the original, 0.61-0.76 = starting to change, 0.77-1 = changes a lot).
5. Sampler:
> Euler a is quite nice but lacks consistency between steps; adding/removing 1 step can change the entire photo.
> Euler is better than Euler a in terms of consistency but requires more steps = longer generation time per image.
> DPM++ 2S a Karras is the best in quality (for me) but is very slow; good for generating a single image.
> DDIM is the fastest and very useful for this case; 20-30 steps can produce a nice quality anime image.
6. Test the prompt on a batch of 4-6 images to choose a seed.
7. Batch img2img.
8. Assemble the generated images into a video. I didn't want to use every frame, so I rendered every 2nd frame and halved the frame rate (see the ffmpeg sketch below).
9. Use Flowframes to interpolate the in-between frames to match the original video frame rate.

Ex: https://files.catbox.moe/e30szo.mp4
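A minimal sketch of the ffmpeg side of steps 1 and 8, driven from Python; the file names, the 30 fps source rate, and the sequentially numbered out/ folder are assumptions for illustration, not the anon's exact commands:

```python
# Rough sketch of frame extraction (step 1) and half-rate reassembly (step 8)
# using ffmpeg via subprocess. Paths and frame rates are placeholders.
import subprocess

SRC = "input.mp4"   # placeholder source video
FPS = 30            # assumed original frame rate

# Step 1: dump the video to an image sequence for batch img2img
subprocess.run(["ffmpeg", "-i", SRC, "frames/%05d.png"], check=True)

# ...run batch img2img over frames/, keeping every 2nd result, numbered sequentially, in out/...

# Step 8: reassemble the kept frames at half the original frame rate
subprocess.run([
    "ffmpeg", "-framerate", str(FPS // 2),
    "-i", "out/%05d.png",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "half_rate.mp4",
], check=True)

# Step 9: interpolate half_rate.mp4 back up to the original frame rate in Flowframes.
```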

File2prompt (I think it's multiple generations in a row?): https://rentry.org/file2prompt

Enhancement Workflow with SD Upscale and inpainting by anon: https://pastebin.com/8WVyDxt9

Upscaling + detail with SD Upscale: https://www.reddit.com/r/StableDiffusion/comments/xkjjf9/upscale_to_huge_sizes_and_add_detail_with_sd/?context=3

Inpainting a face by anon:

send the picture to inpaint
modify the prompt to remove anything related to the background
add (face) to the prompt
slap a masking blob over the whole face
mask blur 10-16 (may have to adjust after), masked content: original, inpaint at full resolution checked, full resolution padding 0, sampling steps ~40-50, sampling method DDIM, width and height set to your original picture's full res
denoising strength .4-.5 if you want minor adjustments, .6-.7 if you want to really regenerate the entire masked area
let it rip

Animating faces by anon:

workflow looks like this:
>generate square portrait (i use 1024 for this example)
>create or find driving video
>crop driving video to square with ffmpeg (one possible command is sketched after this list), making sure to match the general distance from camera and face position (it does not do well with panning/zooming video or too much head movement)
>run thin-plate-spline-motion-model
>take result.mp4 and put it into Video2x (Waifu2x Caffe)
>put into flowframes for 60fps and webm

>if you don't care about upscaling it makes 256x256 pretty easily
>an extension for webui could probably be made by someone smarter than me, it's a bit tedious right now with so many terminals

here is a pastebin of useful commands for my workflow
https://pastebin.com/6Y6ZK8PN
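One possible way to do the square crop from that workflow (my own guess at a command, not the anon's pastebin; the 256x256 target matches the motion model's native size mentioned above, and file names are placeholders):

```python
# Crop the driving video to a centered square and scale it to 256x256,
# the size thin-plate-spline-motion-model works at. Names are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "driving.mp4",
    # centered square crop on the shorter side, then scale down
    "-vf", "crop='min(iw,ih)':'min(iw,ih)',scale=256:256",
    "driving_square.mp4",
], check=True)
```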

Another person who used it: https://www.reddit.com/r/StableDiffusion/comments/ynejta/stable_diffusion_animated_with_thinplate_spline/

Img2img megalist + implementations: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/2940

Runway inpaint model: https://huggingface.co/runwayml/stable-diffusion-inpainting

Inpainting Tips: https://www.pixiv.net/en/artworks/102083584
Rentry version: https://rentry.org/inpainting-guide-SD

External masking for inpainting (no more brush or WIN magnifier): https://github.com/dfaker/stable-diffusion-webui-cv2-external-masking-script
anon: there's a command arg for adding basic painting, it's '--gradio-img2img-tool'

Animation stuff

Script collection: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts
Prompt matrix tutorial: https://gigazine.net/gsc_news/en/20220909-automatic1111-stable-diffusion-webui-prompt-matrix/
Animation Script: https://github.com/amotile/stable-diffusion-studio
Animation script 2: https://github.com/Animator-Anon/Animator
Video Script: https://github.com/memes-forever/Stable-diffusion-webui-video
Masking Script: https://github.com/dfaker/stable-diffusion-webui-cv2-external-masking-script
XYZ Grid Script: https://github.com/xrpgame/xyz_plot_script
Vector Graphics: https://github.com/GeorgLegato/Txt2Vectorgraphics/blob/main/txt2vectorgfx.py
Txt2mask: https://github.com/ThereforeGames/txt2mask
Prompt changing scripts:

Interpolation script (img2img + txt2img mix): https://github.com/DiceOwl/StableDiffusionStuff

img2tiles script: https://github.com/arcanite24/img2tiles
Script for outpainting: https://github.com/TKoestlerx/sdexperiments
Img2img animation script: https://github.com/Animator-Anon/Animator/blob/main/animation_v6.py

Google's interpolation script: https://github.com/google-research/frame-interpolation

Deforum guide: https://docs.google.com/document/d/1RrQv7FntzOuLg4ohjRZPVL7iptIyBhwwbcEYEW2OfcI/edit
Animation Guide: https://rentry.org/AnimAnon#introduction
Rotoscope guide: https://rentry.org/AnimAnon-Rotoscope

Prompt travel: https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel

More animation guide: https://www.reddit.com/r/StableDiffusion/comments/ymwk53/better_frame_consistency/
Animation guide + example for face: https://www.reddit.com/r/StableDiffusion/comments/ys434h/animating_generated_face_test/
Something for animation: https://github.com/nicolai256/Few-Shot-Patch-Based-Training

Models + mixes, you can find most on Civit and HF now

  • V1 repo: https://github.com/CompVis/stable-diffusion
  • V2 repo: https://github.com/Stability-AI/stablediffusion
    Direct Downloads (no login needed)
    https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt
    https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/main/512-base-ema.ckpt
    https://huggingface.co/stabilityai/stable-diffusion-2-depth/resolve/main/512-depth-ema.ckpt
    https://huggingface.co/stabilityai/stable-diffusion-2-inpainting/resolve/main/512-inpainting-ema.ckpt
    https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler/resolve/main/x4-upscaler-ema.ckpt
    
    License: https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/LICENSE-MODEL
    
    Torrent for the 2.0 release by anon. License text included so that it's okay to distribute.
    
    magnet:?xt=urn:btih:f193c9cb1e542f5ea264dfcaeb014635ddf4b25b&dn=stable-diffusion-2&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce
    
    Other:
    
    magnet:?xt=urn:btih:cce6f4ee630efadd365fe4f1655c733e0551d50f&dn=stable-diffusion-2-base&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce
    
    magnet:?xt=urn:btih:ceeb6f68b3b08a4bf6216330a01cecb648ace936&dn=stable-diffusion-2-depth&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce
    
    magnet:?xt=urn:btih:844341921a40b2202a0d97eb17f749281f6566ec&dn=stable-diffusion-2-inpainting&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce
    
    magnet:?xt=urn:btih:ab224077b6f04012f1bc5527ef08e50bab1fb1d5&dn=stable-diffusion-x4-upscaler&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce
    
    "You can check that I didn't pickle these torrents by comparing the .ckpt file's hash against the SHA256 hashes on the official repo. No login needed."
    
    https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/768-v-ema.ckpt
    
  • anything.ckpt (v3 6569e224; v2.1 619c23f0), a Chinese finetune/training continuation of NAI, is released: https://www.bilibili.com/read/cv19603218
from: https://bt4g.org/magnet/689c0fe075ab4c7b6c08a6f1e633491d41186860

another magnet on https://rentry.org/sdmodels from the author

* Mixed SFW/NSFW Pony/Furry V2 from AstraliteHeart: https://mega.nz/file/Va0Q0B4L#QAkbI2v0CnPkjMkK9IIJb2RZTegooQ8s6EpSm1S4CDk

* Mega mixing guide (has a different berry mix): https://rentry.org/lftbl
    * Model showcases from lftbl: https://rentry.co/LFTBL-showcase

* Cafe Unofficial Instagram TEST Model Release
    * Trained on ~140k 640x640 Instagram images made up of primarily Japanese accounts (mix of cosplay, model, and personal accounts)
    * Note: While the model can create some realistic (Japanese) Instagram-esque images on its own, for full potential, it is recommended that it be merged with another model (such as berry or anything)
    * Note: Use CLIP 2 and resolutions greater than 640x640

* Artstation Models (by WD dev): https://huggingface.co/hakurei/artstation-diffusion
    * Prebuilt ckpt (not sure if safe): https://huggingface.co/NoCrypt/artstation-diffusion/tree/main

* Nitro Diffusion (Multi-style model trained on three artstyles, archer style, arcane style, and modern disney style): https://huggingface.co/nitrosocke/Nitro-Diffusion

* High quality anime images (eimisanimediffusion): https://huggingface.co/eimiss/EimisAnimeDiffusion_1.0v

* Hrrzg style 768px: https://huggingface.co/TheLastBen/hrrzg-style-768px

* Ghibli Diffusion (tokens: ghibli style): https://huggingface.co/nitrosocke/Ghibli-Diffusion

* Pokemon Diffusers: https://huggingface.co/lambdalabs/sd-pokemon-diffusers/tree/main

* Animus's premium models got leaked (not sure if safe): https://rentry.org/animusmixed
    - Drive links pass the pickle test
    - Linked directly from his patreon: https://www.patreon.com/posts/all-of-my-free-74510576

**MODEL MIXES**

Raspberry mix download by anon (not sure if safe): https://pixeldrain.com/u/F2mkQEYp
Strawberry Mix (anon, safety caution): https://pixeldrain.com/u/z5vNbVYc

Cafe Unofficial Instagram TEST Model torrent:
magnet:?xt=urn:btih:eb085b3e22310a338e6ea00172cb887c10c54cbc&dn=cafe-instagram-unofficial-test-epoch-9-140k-images-fp32.ckpt&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80&tr=udp%3A%2F%2Fopentor.org%3A2710&tr=udp%3A%2F%2Ftracker.ccc.de%3A80&tr=udp%3A%2F%2Ftracker.blackunicorn.xyz%3A6969&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969

ThisModel:

  1. (Weighted Sum 0.05) Anything3 + SD1.5 = Temp1
  2. (Add Difference 1.0) Temp1 + F222 + SD1.5 = Temp2
  3. (Weighted Sum 0.2) Temp2 + TrinArt2_115000 = ThisModel
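These recipes use only two operations from the webui checkpoint merger; below is a minimal sketch of how "Weighted Sum" and "Add Difference" could be reproduced by hand, assuming plain .ckpt files that torch.load() can open (file names are placeholders):

```python
# Minimal sketch of the two merge operations used in these recipes.
# Non-float tensors (e.g. position ids) are carried over from model A untouched.
import torch

def load_state_dict(path):
    ckpt = torch.load(path, map_location="cpu")
    return ckpt.get("state_dict", ckpt)

def weighted_sum(a, b, alpha):
    # (1 - alpha) * A + alpha * B over the keys both models share
    out = {}
    for k, v in a.items():
        if k in b and torch.is_floating_point(v):
            out[k] = (1 - alpha) * v + alpha * b[k]
        else:
            out[k] = v
    return out

def add_difference(a, b, c, m=1.0):
    # A + m * (B - C): add what B learned relative to C onto A
    out = {}
    for k, v in a.items():
        if k in b and k in c and torch.is_floating_point(v):
            out[k] = v + m * (b[k] - c[k])
        else:
            out[k] = v
    return out

anything = load_state_dict("Anything-V3.0-pruned.ckpt")   # placeholder paths
sd15     = load_state_dict("v1-5-pruned-emaonly.ckpt")
f222     = load_state_dict("f222.ckpt")
trinart  = load_state_dict("trinart2_step115000.ckpt")

temp1 = weighted_sum(anything, sd15, 0.05)        # step 1
temp2 = add_difference(temp1, f222, sd15, 1.0)    # step 2
this_model = weighted_sum(temp2, trinart, 0.2)    # step 3

torch.save({"state_dict": this_model}, "ThisModel.ckpt")
```

The same two helpers cover the other mixes on this page (the vampire mix, Anonmix, Nutmeg, etc.), just with different checkpoints and ratios.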

Anon's model for vampires(?):

My steps

Step 1:
>A : Anything-V3.0
>B : trinart2_step115000.ckpt [f1c7e952]
>C : stable-diffusion-v-1-4-original

A from https://huggingface.co/Linaqruf/anything-v3.0/blob/main/Anything-V3.0-pruned.ckpt
B from https://rentry.org/sdmodels#trinart2_step115000ckpt-f1c7e952
C from https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt

and I "Add Difference" at 0.45, and name as part1.ckpt

Step 2:
>A : part1.ckpt (What I made in Step 1)
>B: Cafe Unofficial Instagram TEST Model [50b987ae]

B is from https://rentry.org/sdmodels#cafe-unofficial-instagram-test-model-50b987ae

and I "Weighted Sum" at 0.5, and name it TrinArtMix.ckpt

Antler's Mix (didn't check for pickles)
https://mega.nz/file/nZtz0LZL#ExSHp7icsZedxOH_yRUOKAliPGfKRsWiOYHqULZy9Yo

Alternate mix, apparently? (didn't check for pickles)

((anything_0.95 + sd-1.5_0.05) + f222 - sd-1.5)_0.75 + trinart2_115000_0.25

RandoMix2 (didn't check for pickles)
magnet:?xt=urn:btih:AB6A6C3F6AA0858030B9B85D28B243A4FF9F5935&dn=RandoMix2.zip&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce

RaptorBerry (didn't check for pickles)
magnet:?xt=urn:btih:166c9caf38801ba4e10912b5c91ccaaec585534c&dn=RaptorBerry%20Final%20Mix.ckpt&tr=http%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&tr=http%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2fopentracker.i2p.rocks%3a6969%2fannounce&tr=udp%3a%2f%2fopen.stealth.si%3a80%2fannounce&tr=udp%3a%2f%2ftracker.torrent.eu.org%3a451%2fannounce

NAI+SD+Trinart characters+Trinart+F222 (weighted sum, values less than 0.3): https://mega.nz/file/JblSFKia#n8JNfYWXaMeeQEstB-1A1Ju5u3m9I-u-n3WcmVpz2lo

"Ben Dover Mix"©®™ is my mix
if you're interested
follow this guide https://rentry.org/lftbl#berrymix
The mix is done exactly the same way as berrymix
but with anythingv3 instead of nai
f222 instead of f111
and sd v1.5 instead of sd v1.4

https://rentry.org/ben_dover

AloeVera mix: https://mega.nz/file/4bEzxB6Q#j3QwgNxHiYOmT8Y4OgHP9mlzvFbCkEK1DUepMoIBI50

Nutmeg mix:

0.05 NAI + SD1.5
0.05 mix + f222
0.05 mix + r34
0.05 mix + SF
0.3 Anything + mix

Hyper-versatile SD model: https://huggingface.co/BuniRemo/Redshift-WD12-SD14-NAI-FMD_Checkpoint_Merger_-_Hyper-Versatile_Stable_Diffusion_Model

  • Made from Redshift Diffusion, Waifu Diffusion 1.2, Stable Diffusion 1.4, Novel AI, Yiffy, and Zack3D_Kinky-v1; capable of rendering humans, furries, landscapes, backgrounds, buildings, Disney style, painterly styles, and more

Hassan (has a few mixes, not sure if the dls are safe): https://rentry.org/sdhassan

Anonmix:

Weighted Sum @ 0.05 to make tempmodel1

A: Anything.V3, B: SD1.5, C: null

Add Difference @ 1.0 to make tempmodel2

A: tempmodel1, B: Zeipher F222, C: SD1.5

Weighted Sum @ 0.25 to make tempmodel3

A: tempmodel2, B: r34_e4, C: Null

Weighted Sum @ 0.20 to make FINAL MODEL

A: tempmodel3, B: NAI

SquidwardBlendv2: https://mega.nz/file/r9MzkQxb#QeiM9HJf0wAw68yg-q9RtrbGucB7h712yu-EpvFX3n0

Big collection of berry mixes: https://rentry.org/dbhhk (https://archived.moe/h/thread/6984678/#q6985842)

Super duper mixing cookbook from hdg (most updated): https://rentry.org/hdgrecipes

Dreambooths

Embeddings

Found on 4chan:

Hypernetworks

Hypernetwork Dump: https://gitgud.io/necoma/sd-database
Collection: https://gitlab.com/mwlp/sd
Another collection: https://www.mediafire.com/folder/bu42ajptjgrsj/hn

Found on 4chan:

Found on Discord:

Colored eyes:

>Released by IWillRemember#1912
>I'm releasing an Hn to do better animation like glowing eyes, and a more slender face/upper body.
>
>The tags are : 
>detailed eyes, 
>(color) eyes  = ex: white eyes, blue eyes, etc etc
>collarbone
>
>Trained for 12k steps on a dataset of ~80 images
>
>You can use the Hn with a str of 1 without any problem.
>
>Happy prompting!
>
>Example: https://media.discordapp.net/attachments/1023082871822503966/1038115846222008392/00162-3940698197-masterpiece_highest_quality_digital_art_1girl_on_back_detailed_eyes_perfect_face_detailed_face_breasts_white_hair_yell.png?width=648&height=702
>
>https://mega.nz/file/dHFwmaxS#NQhMPjT4TElPXX_YAZhTsFrQ36PDJhpWFm9BcHU_BO4

Aesthetic Gradients

Collection of Aesthetic Gradients: https://github.com/vicgalle/stable-diffusion-aesthetic-gradients/tree/main/aesthetic_embeddings

Polar Resources

Datasets

Old(?) Training Info

From anon: For sigmoid/inverse sigmoid interpolation between models, add this code starting at line 38 of merge.py:

for key in tqdm(theta_0.keys(), desc="Stage 1/2"):
    if "model" in key and key in theta_1:
        # sigmoid (smoothstep): use a local variable so alpha itself isn't
        # re-smoothed on every iteration of the loop
        a = alpha * alpha * (3 - (2 * alpha))
        theta_0[key] = theta_0[key] + ((theta_1[key] - theta_0[key]) * a)

        # inverse sigmoid (needs `import math` at the top of merge.py)
        #a = 0.5 - math.sin(math.asin(1.0 - 2.0 * alpha) / 3.0)
        #theta_0[key] = theta_0[key] + ((theta_1[key] - theta_0[key]) * a)

        # weighted sum
        #theta_0[key] = ((1 - alpha) * theta_0[key]) + (alpha * theta_1[key])

Supposedly how to append model data without merging by anon:

x = (Final Dreambooth Model) - (Original Model)
filter x for x >= (Some Threshold)
out = (Model You Want To Merge It With) * (1 - M) + x * M
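A hedged sketch of that idea, reading the threshold as a magnitude filter on the delta (the threshold value, the mix strength M, and the file names are all assumptions, not values from the anon):

```python
# Sketch of "append without merging": take the delta a dreambooth run added on
# top of its base model, zero out small changes, and blend only that delta into
# another model. Threshold, M, and paths are illustrative assumptions.
import torch

def load_sd(path):
    ckpt = torch.load(path, map_location="cpu")
    return ckpt.get("state_dict", ckpt)

dreambooth = load_sd("dreambooth_final.ckpt")   # placeholder file names
base       = load_sd("base_model.ckpt")
target     = load_sd("target_model.ckpt")

threshold = 1e-3   # assumed cutoff for "meaningful" changes
M = 0.5            # assumed mix strength

out = {}
for k, t in target.items():
    if k in dreambooth and k in base and torch.is_floating_point(t):
        x = dreambooth[k] - base[k]                               # what dreambooth changed
        x = torch.where(x.abs() >= threshold, x, torch.zeros_like(x))
        out[k] = t * (1 - M) + x * M
    else:
        out[k] = t

torch.save({"state_dict": out}, "appended.ckpt")
```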

Model merging method that preserves weights: https://github.com/samuela/git-re-basin

Alternate model merging using https://github.com/bmaltais/dehydrate by anon:

Dehydrate a model
Hydrate it back into a dreambooth
Merge with other stuff
Run 'python ckpt_subtract.py dreamboothmodel.ckpt basemodel.ckpt --output dreambooth_only' to dehydrate.
Run 'python ckpt_add.py dreambooth_only target_model.ckpt --output output_model.ckpt' to hydrate it into another model.

3rd party git re basin:
https://github.com/ogkalu2/Merge-Stable-Diffusion-models-without-distortion

Git rebasin pytorch: https://github.com/themrzmaster/git-re-basin-pytorch

  • Aesthetic Gradients: https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients
  • Image aesthetic rating (?): https://github.com/waifu-diffusion/aesthetic
  • 1 img TI: https://huggingface.co/lambdalabs/sd-image-variations-diffusers
  • You can set a learning rate of "0.1:500, 0.01:1000, 0.001:10000" in textual inversion and it will follow the schedule
  • Tip: combining natural language sentences and tags can create a better training
  • Dreambooth on 2080ti 11GB (anon's guide): https://rentry.org/tfp6h
  • Training a TI on 6gb (not sure if safe or even works, instructions by uploader anon): https://pastebin.com/iFwvy5Gy
    • Have xformers enabled.

      This diff does 2 things.

      1. enables cross attention optimizations during TI training. Voldy disabled the optimizations during training because he said they gave him bad results. However, if you use the InvokeAI optimization, or xformers after the xformers fix, it no longer gives bad results.
        This saves around 1.5GB vram with xformers
      2. unloads vae from VRAM during training. This is done in hypernetworks, and idk why it wasn't in the code for TI. It doesn't break anything and doesn't make anything worse.
        This saves around .2 GB VRAM

      After you apply this, turn on Move VAE and CLIP to RAM and Use cross attention optimizations while training

  • By anon:

    No idea if someone else will have a use for this but I needed to make it for myself since I can't get a hypernetwork trained regardless of what I do.

    https://mega.nz/file/LDwi1bab#xrGkqJ9m-IsqsTQNixVkeWrGw2HvmAr_fx9FxNhrrbY

    The link above is a spreadsheet where you paste the hypernetwork_loss.csv data into cell A1 (A2 is where the numbers should start). Then you can use M1 to set how many epochs of the most recent data you want to use for the red trendline (green is the same length but starts before red). Outlier % is for filtering out extreme points: 100% means all points are considered for the trendline, 95% filters out the top and bottom 5%, etc. Basically you can use this to see where the training started fucking up.

  • Anon's best:

    Creation:
    1,2,1
    Normalized Layers
    Dropout Enabled
    Swish
    XavierNormal (Not sure yet on this one. Normal or XavierUniform might be better)

Training:

Rate: 5e-5:1000, 5e-6:5000, 5e-7:20000, 5e-8:100000
Max Steps: 100,000
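For context, a rough sketch of what that "1, 2, 1" layer structure with normalized layers, dropout, and Swish could look like as a standalone PyTorch module; this is only an illustration of the settings above, not the webui's actual hypernetwork code, and the dropout rate is an assumption:

```python
import torch
import torch.nn as nn

class HypernetworkModuleSketch(nn.Module):
    """Illustrative '1, 2, 1' MLP: dim -> 2*dim -> dim, applied residually."""

    def __init__(self, dim: int, dropout: float = 0.3):  # dropout rate assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim * 2),
            nn.SiLU(),              # Swish activation
            nn.LayerNorm(dim * 2),  # "Normalized Layers"
            nn.Dropout(dropout),    # "Dropout Enabled"
            nn.Linear(dim * 2, dim),
        )
        # "XavierNormal" init on the linear layers
        for m in self.net:
            if isinstance(m, nn.Linear):
                nn.init.xavier_normal_(m.weight)
                nn.init.zeros_(m.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # the learned correction is added on top of the original activations
        return x + self.net(x)
```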

Vector guide by anon: https://rentry.org/dah4f

  • Another training guide: https://www.reddit.com/r/stablediffusion/comments/y91luo
  • Super simple embed guide by anon: Grab the high quality images and run them through the preprocessor. Create an embedding called "art by {artist}". Then train that same embedding with your processed images and set the learning rate to the following: 0.1:500, 0.05:1000, 0.025:1500, 0.001:2000, 1e-5. Run it for 10k steps and you'll be good. No need for an entire hypernetwork.
  • Has training info and a tutorial for Asagi Igawa, Edjit, and Rouge the Bat embeds (RealYiffingFar#4510): https://mega.nz/folder/5nIAnJaA#YMClwO8r7tR1zdJJeTfegA
  • Anon's dreambooth guide:
    for a character, steps ~1500-2000
    checkpoint every 500 if you have the VRAM for it, else 99999 (i.e. at the end); previews are shit, don't even bother, 99999
    learning rate: 0.000001-0.000005, I don't have a reason for it, the default is probably fine.
    instance prompt: [filewords], class prompt: 1girl, 20x as many regularisation images as training images; style matters, if you want anime get anime regularisation images.
    advanced: auto-adjust, batch size: 2, 8bit adam, fp16, don't cache latents (noticeable speedup if you do cache), train text, train EMA, gradient checkpointing, 2 gradient accumulation

None of this is concrete stuff I do every time, I just roll with whatever works. The single most important thing is to ensure you never tag anything that isn't in the image after cropping.
Reduce the tags as much as humanly possible (a tag-collapsing sketch follows below), e.g.:

legwear, black thighhighs, long socks, long thighhighs, pantyhose, stockings, etc.

to just:

thighhighs

Try to add images that both do and do not use each of your tags: if you have a pic with thighhighs, include at least one without, otherwise the tag is meaningless.
If your training cannot establish a positive and a negative for each tag, it's going to struggle to recall those features.
Have Makima with yellow eyes? Include some girl with similar features but red or blue eyes, or just an entirely different girl that's been accurately tagged with the negatives you need.
In this way you can distinguish between features and emphasise stuff.
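The tag-collapsing sketch referenced above: a toy example of folding near-synonymous booru tags into one canonical tag before training (the synonym map is made up for illustration, not a recommended list):

```python
# Toy sketch of the tag-reduction idea: collapse near-synonymous tags down to
# one canonical tag so each concept has a single, consistent label.
SYNONYMS = {
    "black thighhighs": "thighhighs",
    "long thighhighs": "thighhighs",
    "stockings": "thighhighs",
    "legwear": "thighhighs",
}

def reduce_tags(tags):
    collapsed = [SYNONYMS.get(t.strip(), t.strip()) for t in tags]
    # keep the first occurrence only, preserving order
    return list(dict.fromkeys(collapsed))

print(reduce_tags("1girl, legwear, black thighhighs, stockings, smile".split(",")))
# ['1girl', 'thighhighs', 'smile']
```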

Training dataset with aesthetic ratings: https://github.com/JD-P/simulacra-aesthetic-captions

random training stuff that was posted months ago

Dreambooth thing in Japanese: https://note.com/kohya_ss/n/nee3ed1649fb6

  • "Has aspect ratio bucketing, saving in fp16, etc."

The GPU seems to affect training results (the --lowvram/--medvram args do too)

Official PyTorch implementation of one-shot text-to-image generation via contrastive prompt-tuning, AKA 1-image embedding training: https://github.com/7eu7d7/DreamArtist-stable-diffusion
Extension: https://github.com/7eu7d7/DreamArtist-sd-webui-extension
DreamArtist extension changes ui.py code in the modules directory, which might not be safe

reddit stuff posted months ago:

Pub: 23 Nov 2022 18:12 UTC
Edit: 04 May 2023 17:57 UTC