/trash/ MEGA-Megacollection (WIP)

Splitting the Rentry

We did it guys
Character Limit exceeded

I have moved the main LoRA section off to a new rentry:
https://rentry.org/trashcollects_loras

Character Rentry

Huge, separate rentry maintained by another anon; I think most of the entries are his own LoRAs, even. While I mostly just collect links, that anon curates example images to go along with his LoRAs.
It comes with an edit code for making changes to the rentry, so if you have LoRAs you want to share in the thread, or example images for LoRAs posted here on collects, feel free to add new entries to the two rentries.

Part 1: https://rentry.org/c6nt3cnh
Part 2: https://rentry.org/5kdhna5w

PonyDiffusion6 XL artist findings

https://lite.framacalc.org/4ttgzvd0rx-a6jf
Putting it here for better visibility and until I figure out where else to put it.
The author of PonyDiffusion V6 obfuscated artist names before training; as it turns out, they seem to map to random three-letter combinations.
/h/ has been spending some time trying out various combinations and documenting the findings in the spreadsheet above. If you try some combinations and recognize the artist behind them, feel free to contribute.

Use the base PonyDiffusion V6 XL checkpoint, not DPO, AutismMix or similar. Those will most likely show similar influences at the very least, but for the sake of comparison you should stick to the base checkpoint.
DDL: https://civitai.com/api/download/models/290640?type=Model&format=SafeTensor&size=pruned&fp=fp16
Mirror: https://pixeldrain.com/u/UPddM9ez
If you haven't used an SDXL model before: SDXL models need a different VAE than the SD 1.5 ones.
VAE: https://civitai.com/api/download/models/290640?type=VAE&format=SafeTensor
Mirror: https://pixeldrain.com/u/kzu1x1u8

Also, remember to change Clip Skip to 2.

I'm just gonna link y'all furries all the compilations so far so you can have an easier time looking:

aaa to bzm:
https://files.catbox.moe/c0rl1r.jpg
cad to eum:
https://files.catbox.moe/ewr0s0.jpg
evg to hns:
https://files.catbox.moe/tda4ir.jpg
hpb to jki:
https://files.catbox.moe/44a2rc.jpg
jkv to lek:
https://files.catbox.moe/tj2aeq.jpg
lgu to mkb:
https://files.catbox.moe/p7qqaz.jpg
Here's mkg to nyj:
https://files.catbox.moe/365n8h.jpg
and nyp to pyb:
https://files.catbox.moe/64zi6v.jpg

Less consistent:
aav to frw:
https://files.catbox.moe/1z4efd.jpg
fsp to klm:
https://files.catbox.moe/jnjwqi.jpg
kmq to ojn:
https://files.catbox.moe/6qpxyi.jpg
oka to rrg:
https://files.catbox.moe/hvy7re.jpg

Models

This list of models has been added to over the course of more than a year now. Therefore, most models at the top of this list are old. Start at the bottom of the "Models" section, or better yet, check out the current list of recommended models over in /trash/sdg/, then check back and look them up via CTRL+F.

Base SD 1.5
.ckpt: https://pixeldrain.com/u/HQBAmpyD

Easter e17
.ckpt: https://mega.nz/file/Bi5TnJjT#Iex8PkoZVdBd3x58J52ewYLjo-jn9xusnKAhyuNtU-0
.safetensors: https://pixeldrain.com/u/PJUjEzAB

Yiffymix (yiffy e18 and Zeipher F111)
Yiffymix: https://civitai.com/api/download/models/4053?type=Model&format=SafeTensor&size=full&fp=fp16
Yiffymix recommended vae: https://civitai.com/api/download/models/4053?type=VAE&format=Other

Yiffymix 2 (based on fluffyrock-576-704-832-lion-low-lr-e16-offset-noise-e1) (Use Clip Skip 1)
Yiffymix 2: https://civitai.com/api/download/models/40968?type=Model&format=SafeTensor&size=pruned&fp=fp32
YiffyMix2 Species/Artist Grid List [FluffyRock tags]: https://mega.nz/folder/UBxDgIyL#K9NJtrWTcvEQtoTl508KiA/folder/YNhymCLY

YiffAnything
A merge of Yiffy and Anything. Posts from the archives indicate that the hash of the file below differs from the usual one (see the note further down). Either way, you can either download the file below or merge it yourself in the Checkpoint Merger: select Yiffy in box A, AnythingV3 in box B, the leaked NovelAI anime model in box C, multiplier 1, interpolation mode "Add difference".
https://pixeldrain.com/u/QxV5FMjc

Explanation as stated by anon:
>Reports stated that the hash is different from what is expected
automatic1111 changed how model hashes are calculated a couple of weeks ago: they used to be 8 characters long ever since they were introduced, and are now 10 characters long, which are simply the first 10 characters of the SHA-256 hash.
Every old link you'll find mentioning a hash will therefore show a different hash in the current webui.
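For reference, here is a minimal sketch of what that add-difference recipe does under the hood; it's purely illustrative, the filenames are placeholders, and the A1111 Checkpoint Merger performs the same math for you.

```python
# Illustrative sketch of an "add difference" merge: result = A + (B - C) * M.
# Filenames below are placeholders for whatever your local copies are called.
from safetensors.torch import load_file, save_file

a = load_file("yiffy-e18.safetensors")            # box A: Yiffy
b = load_file("anything-v3.safetensors")          # box B: AnythingV3
c = load_file("nai-animefull-final.safetensors")  # box C: leaked NAI anime model
m = 1.0                                           # multiplier

merged = {}
for key, tensor_a in a.items():
    if key in b and key in c and tensor_a.shape == b[key].shape == c[key].shape:
        merged[key] = tensor_a + (b[key] - c[key]) * m
    else:
        merged[key] = tensor_a  # keep A's weights where the models don't line up

save_file(merged, "yiffanything.safetensors")
```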

7th_furry tests (seem to be merges of the 7th layer models)
7th furry testA: https://huggingface.co/syaimu/7th_furry/resolve/main/7th_furry_testA.ckpt
testB: https://huggingface.co/syaimu/7th_furry/resolve/main/7th_furry_testB.ckpt
testC: https://huggingface.co/syaimu/7th_furry/resolve/main/7th_furry_testC.ckpt

NovelAI Leak + VAE
https://pixeldrain.com/u/rWQ9wQmk

NAI Hypernetworks
https://pixeldrain.com/u/BRh8qfJM

AnythingFurry

Safetensor: https://civitai.com/api/download/models/5927?type=Model&format=SafeTensor&size=full&fp=fp16
Config (yaml) file, put it in your model folder next to the above: https://civitai.com/api/download/models/5927?type=Config&format=Other

Lawlas's Yiffymix 1 and 2
Version 2 has been merged with AOM3 and other anime models, hence the need (described below) to weight furry-related tags heavily. I personally prefer 1, but try both and see what you like more.
The mentioned embeddings are on Huggingface. EasyNegative is on CivitAI, but shouldn't need an account.

Version 1

Description: Hello and welcome. This is my custom furry model mix based on yiffy-e18.
It's able to produce sfw/nsfw furry anthro artworks of different styles with consistent quality, while maintaining details on stuff like clothes, background, etc. with simpler prompts.

I personally use it with these settings:

CFG: 6-8

steps: 23-150

Size: 512 x 704 or 578 x 768 (then upscale it. Some of the example images here are upscaled.)

Sampler: DPM++ 2M Karras or Euler a
(Clip skip 1 is recommended. 2 also works but makes the style different)
For better results, it's recommended to use it with [anything v4 vae file](https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.0.vae.pt) or you can use versions with baked-in VAE.
Starting the positive prompt with "uploaded on e621" and the negative prompt with "(worst quality, low quality:1.4)" helps too.

PS: Consider using the bad-artist embedding or boring_e621 as well; both are optional, and you can try other textual inversions too. Note that the bad-artist embedding is triggered by writing bad-artist, not bad_artist, and it doesn't necessarily improve the quality of the result. boring_e621 is the overall recommendation.

fp 16 baked-in vae (no vae needed, if you want to use your own do not download this one): https://civitai.com/api/download/models/15584?type=Model&format=SafeTensor&size=full&fp=fp16
fp16 no vae pruned: https://civitai.com/api/download/models/5370?type=Model&format=SafeTensor&size=full&fp=fp16

Version 2

Description: Hello and welcome. This model is an upgrade from my previous model Lawlasmix. I used AOM3 and miscellaneous models during the making of the model, so don't forget to check that out.

It's capable of generating sfw/nsfw furry artworks in general with consistent quality, while having decent details in hands, clothing, background, etc. Compared with the old Lawlasmix, it's overall more anime style oriented, and doesn't need artist tags all the time (It can still change the styles if you use them anyways). It can be tricky to write prompts for the model if you're new to it, so make sure you read the tips carefully!

▼Tips

Personally I use it with these settings:

Batch count: 6

Clip skip: 1 or 2 (for different styles)

CFG: 6-8

steps: 40-150

Size: 512 x 704 or 512 x 780, etc. (then use hires fix, or SD upscale in img2img)

Sampler: DPM++ 2M Karras or Euler a

Prompting

The model can make amazing results with very simple prompts as long as you use it right. It uses both danbooru and e621 tags, like 1boy, 1girl, solo_focus. It's recommended to start the positive prompt with "(furry art, uploaded on e621:1.4)" and the negative prompt with "(worst quality, low quality:1.4)". Since the merge has a significant amount of anime models in it, you need to give more weight to furry-related tags: use tags like (anthro furry:1.6) in the positive prompt and (human:1.6) in the negative prompt to get it to make furry art. The model performs rather well when you specify the color of the fur, for instance (detailed red scales:1.5) or (detailed blue fluffy fur:1.5). Feel free to check out the example images for their prompts to help you write your own.

VAE

If you don't use any vae with the original version, the outputs of the model may suffer from loss of colors. To avoid this, consider using the anythingv3 vae or the orangemix vae; I provide the download of the orangemix vae here. However, if you don't want to use any VAEs or can't get them to load, you can now choose the versions with a VAE baked into the model.

Embedding

Feel free to use other textual inversions, but I strongly recommend boring_e621. It's trained specifically on furry artwork and works great in this case. Try EasyNegative as well.

Known problem(s):

Since the model has AOM3 mixed in, it's very common for it to generate results in which characters have "M" shaped hair. Consider using tags like (unique hairstyle, unique fringe, unique bangs:1.6) or using img2img to generate alternative images to help you get to the desired results.

The model has been reported to be temperamental, so to speak. I suggest setting the batch count to more than 2 so you have more results to choose from.

▼Credits:

Here are models used as far as I can remember:

AOM3

DivineEleganceMix

My previous model

I apologize for not keeping a record of the models I used. Without their amazing work, this model wouldn't have even existed. Kudos to every creator on this site!

baked-in vae: https://civitai.com/api/download/models/15460?type=Model&format=SafeTensor&size=full&fp=fp16
no vae: https://civitai.com/api/download/models/15288?type=Model&format=SafeTensor&size=full&fp=fp16
orangemix.vae.pt: https://civitai.com/api/download/models/15288?type=VAE&format=Other

AbyssOrangeMix2 (for those without a Huggingface account)

AbyssOrangeMix2_sfw.safetensors

https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_sfw.safetensors

AbyssOrangeMix2_nsfw.safetensors

https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_nsfw.safetensors

AbyssOrangeMix2_hard.safetensors

https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_hard.safetensors

AOM VAE (rename it the same as the AOM model you use)

https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt

Frankenmodels (Yttreia's Merges)
https://drive.google.com/drive/folders/1kQrMDo2AtzcfAycGhI79M2YnPUHebu6M

Explanation:
The filename is the recipe. Minus symbols are averaged, plus symbols are added.
Tried avoiding any models that need special VAEs.
Uh, no real comments otherwise, my Twitter is https://twitter.com/Yttreia

Gay621 v0.5
https://civitai.com/api/download/models/12262?type=Model&format=PickleTensor&size=full&fp=fp16

Based64 Mix
https://pixeldrain.com/u/khSK5FBj

Crosskemono (CivitAI, links last updated: 03/28/2023)

VAE: https://civitai.com/api/download/models/14048?type=VAE&format=Other
furry_kemono.pt (Hypernetwork, should be the same one as the one from the NAI leak): https://civitai.com/api/download/models/17114?type=Model&format=PickleTensor&size=full&fp=fp16
CrosskemonoA: https://civitai.com/api/download/models/14048?type=Model&format=SafeTensor&size=full&fp=fp16
CrosskemonoB: https://civitai.com/api/download/models/14047?type=Model&format=SafeTensor&size=full&fp=fp16
CrosskemonoC: https://civitai.com/api/download/models/14352?type=Model&format=SafeTensor&size=full&fp=fp16
CrosskemonoD: https://civitai.com/api/download/models/14575?type=Model&format=SafeTensor&size=full&fp=fp16
CrosskemonoE: https://civitai.com/api/download/models/19806?type=Model&format=SafeTensor&size=full&fp=fp16
CrosskemonoE_2: https://civitai.com/api/download/models/20242?type=Model&format=SafeTensor&size=full&fp=fp16
CrosskemonoF: https://civitai.com/api/download/models/17113?type=Model&format=SafeTensor&size=full&fp=fp16
CrosskemonoG: https://civitai.com/api/download/models/22259?type=Model&format=SafeTensor&size=full&fp=fp16
CrosskemonoG_2: https://civitai.com/api/download/models/22919?type=Model&format=SafeTensor&size=full&fp=fp16

Crosskemono 2 (with added E621 Tag support)
Same VAE as V1, see above. TEST features e621 tags, TEST_2 uses booru tags, and Full supports both types.
2.5 has added Noise Offset for brighter brights and darker darks - keep this in mind if you want to use it for any merges.
Crosskemono 2.0TEST: https://civitai.com/api/download/models/27823?type=Model&format=SafeTensor&size=full&fp=fp16
Crosskemono 2.0TEST_2: https://civitai.com/api/download/models/28447?type=Model&format=SafeTensor&size=full&fp=fp16
Crosskemono 2.0 Full: https://civitai.com/api/download/models/32830?type=Model&format=SafeTensor&size=full&fp=fp16
Crosskemono 2.5: https://civitai.com/api/download/models/47368?type=Model&format=SafeTensor&size=full&fp=fp16

If you make some cool gens with these, feel free to post them over on the Crosskemono CivitAI page and leave a rating - the author machine-translated his way onto /trash/ to ask for feedback and examples, and is bummed out that he barely gets any feedback on the model over on CivitAI.

(Screenshots: the author ITT; the author ITT 2, after I added the above note)

Crosskemono 3
https://www.seaart.ai/models/detail/ac3c26ac1ff19c18dc840f3b8e162c25
https://pixeldrain.com/u/3YXax5DP

PC 98 Model
https://mega.nz/file/uJkQBbKL#qVI95nOJkkMAjPQXBsvPZA9bTSaF5gOv0IA0XCjdE2E

Low-Poly
mega.nz/file/PAcABRrS#tFCWwWyyatquNvrzLIUqPkrpYJhsS9nEjpY0mv4SNKM

Fluffusion
Fluffusion Prototype r10 e7 640x

Direct DL link from CivitAI for r1 e20: https://civitai.com/api/download/models/80182?type=Model&format=SafeTensor&size=pruned&fp=fp16

Separate Fluffusion rentry maintained by the model author (?): https://rentry.org/fluffusion

Below are links to the Prototype r10 e7 model; you probably won't need it, but it's here for posterity.

Prototype r10 e7 640x
.ckpt: https://pixeldrain.com/u/BiRcb1bH
.safetensors: https://pixeldrain.com/u/f9Le5J9P
e621 tags with post counts used in Fluffusion (could use it for looking up stuff, making wildcards or for use in Tag Autocomplete, put it in \extensions\a1111-sd-webui-tagcomplete\tags for that):
https://pixeldrain.com/u/nykiAMAi

REVFUR

REVFUR: https://easyupload.io/a72yws

Fluff_Proto Merges

0.7(revAnimated_v11) + 0.3(fluff_proto_r10_e7_640x): https://easyupload.io/0eqrwu

Revfluff : https://pixeldrain.com/u/KJ1TKS26
0.8 (fluff_proto_r10_e7_640x) + 0.2( revAnimated_v11)

fluff-koto: https://pixeldrain.com/u/mMFsR6Ez DEAD
0.75 (fluff_proto_r10_e7_640x) + 0.25( kotosmix_v10)

Tism Prism (Sonic characters)
https://archive.org/details/tism-prism-AI

Fluffyrock
What the fuck do all these models MEAN!? (Taken from the Discord on August 27th)

MAIN MODELS (For these, typically download the most recent one):
e6laion (Combination dataset of LAION and e621 images, can do realism rather well. Currently being trained with vpred, so it will require a YAML file, and for optimal performance needs CFG rescale too. Uses Lodestone's 3M e621 dataset, almost the entire website.)
fluffyrock-1088-megares-offset-noise-3M-SDXLVAE (Offset-noise version of the model using the 3M dataset, but experimenting with Stable Diffusion XL's VAE. Will produce bad results until more training happens.)
fluffyrock-1088-megares-offset-noise-3M (Offset-noise version of the model using the 3M dataset, without any fancy vpred, terminal snr, or a different VAE. Fairly reliable, but may not produce as good results as others.)
fluffyrock-1088-megares-terminal-snr-vpred (Vpred and terminal snr version of the model using the 3M dataset. Requires a YAML file to work, and is recommended to install the CFG rescale extension for optimal results.)
fluffyrock-1088-megares-terminal-snr (Terminal snr version of the model using the 3M dataset. Fairly plug and play, but is some epochs behind compared to the others.)
fluffyrock-NoPE (Experimental model to try and remove the 75-token limit of Stable Diffusion by removing positional encoding. Uses Vpred, so will require a YAML file, and again, use it with CFG rescale for optimal performance.)
OUTDATED MODELS:
csv-dump
fluffyrock-1088-megares
fluffyrock-2.1-832-multires-offset-noise
fluffyrock-2.1-832-multires
fluffyrock-832-multires-offset-noise
fluffyrock-832-multires
fluffyrock-1088-megares-offset-noise
old-768-model
old-adam-832-model
old-experimental-512-model
old-experimental-640-model
OTHER REPOS:
Polyfur: e6laion but with autocaptions, so should improve at natural language prompts. Vpred + terminal SNR, will require a YAML and should use CFG rescale
Pawfect-alpha: 500k images from FurAffinity. Vpred and terminal SNR, so will require YAML and should use CFG rescale.

Artist comparison: https://files.catbox.moe/rmyw4d.jpg
Repository (GO HERE FOR DOWNLOADS): https://huggingface.co/lodestones/furryrock-model-safetensors
CivitAI page: https://civitai.com/models/92450
Artist study: https://pixeldrain.com/l/caqStmwR DEAD
Tag Autocomplete CSV: https://cdn.discordapp.com/attachments/1086767639763898458/1092754564656136192/fluffyrock.csv

Crookedtrees (Full Model)
Use crookedtrees in your prompt

https://mega.nz/file/bd9mXRRR#EHMQz-Z2eVZ8t0E7dggXDuulhZJ7M71oHWPbYTYgSsc

0.3(acidfur_v10) + 0.7(0.5(fluffyrock-576-704-832-960-1088-lion-low-lr-e22-offset-noise-e7) + 0.5(fluffusion_r1_e20_640x_50))

https://mega.nz/file/BXEywRzA#ytZAXFZDNmCu54rDQQKKGJdAXIOqEPw6_W9BH4DxKW4

Monstermind (Style)

Use mmind and argon_vile in your prompt, plus tags like on_back, high-angle_view, etc. Ghost_hands and disembodied_hand are hit or miss.

https://mega.nz/file/3AsRDJgZ#-owGxsdtLFYjFv43H6VysBMO88Vk3SKtLhF7mHdC03I

BB95 Furry Mix

V7.0: https://civitai.com/api/download/models/82523?type=Model&format=SafeTensor&size=full&fp=fp16
V9.0: https://civitai.com/api/download/models/105881?type=Model&format=SafeTensor&size=full&fp=fp32
V10.0: https://civitai.com/api/download/models/119411?type=Model&format=SafeTensor&size=pruned&fp=fp16
V14.0: https://civitai.com/api/download/models/397456?type=Model&format=SafeTensor&size=pruned&fp=fp16

v14.0: This version improves fur and generates better bodies.

This version has a baked-in VAE. You don't need to download the VAE files.

V10.0 RELEASED: This version can generate at higher resolutions than v9 with fewer mistakes. More realistic, better fur, better clothes, better NSFW!

This version has a baked in VAE.

Please consider supporting me so I can continue to make more models --> https://www.patreon.com/BB95FurryMix

Don't forget to join the Furry Diffusion discord server --> https://discord.gg/furrydiffusion

Since v3, this model uses e621 tags.

This model is a mix of various furry models.

It's doing well on generating photorealistic male and female anthro, SFW and NSFW.

I HIGHLY recommend using Hires Fix for better results.

Below is an example prompt for the v7/v6/v5/v4/v3/v2.

Positive:

anthro (white wolf), male, adult, muscular, veiny muscles, shorts, tail, (realistic fur, detailed fur texture:1.2), detailed background, outside background, photorealistic, hyperrealistic, ultradetailed,

Negative:

I recommend using boring_e621; you can add bad-hands-v5 if you want.

Settings:

Steps: 30-150

Sampler: DDIM or UniPC or Euler A

CFG scale: 7-14

Size: from 512x512 to 750x750 (only v4/v5/v6/v7)

Denoising strength: 0.6

Clip skip: 1

Hires upscale: 2

Hires steps: 30-150

Hires upscaler: Latent (nearest)

Furtastic V2.0

Description: https://files.catbox.moe/cr137n.png

Checkpoint: https://civitai.com/api/download/models/84134?type=Model&format=SafeTensor&size=pruned&fp=fp16
negative embeddings: https://civitai.com/api/download/models/84134?type=Training%20Data

Put embeddings in \stable-diffusion-webui\embeddings, and use the filenames as a tag in the negative prompt.

EasyFluff

Description: EasyFluff V9
A vpred model; you need both the safetensors file and an accompanying yaml config. See here for more info: https://rentry.org/trashfaq#how-do-i-use-vpred-models

https://huggingface.co/zatochu/EasyFluff/tree/main

What are Fun/Funner Editions?

Tweaked UNet with supermerger's adjust feature to dial back noise/detail, which can resolve eye sclera bleed in some cases.
Adjusted contrast and color temperature. (Less orange/brown by default)
CLIP should theoretically respond more to natural language. (Don't conflate this with tags not working or having to use natural language. Also it is not magic, so don't expect extremely nuanced prompts to work better.)
FunEdition and FunEditionAlt are earlier versions before adjusting the UNET further to fix color temperature and color bleed. CLIP on these versions may be less predictable as well.

Indigo Furry mix
https://civitai.com/models/34469?modelVersionId=167882
Various different model mixes with varying styles.

All models come with a baked-in VAE, but you can use your own VAE.

Of the many versions uploaded, I will provide direct links to the following recommended models (as of Feb 17th 2024):

Test V-Pred Model:

This is SE01_vpred, a test v-prediction model. It's similar to the hybrid models but with higher color saturation and contrast, and it can do almost pure black images xd. It is also more stable and does better tails than the hybrid models (but not always xd). It seems to be more versatile in styles, too.
Personally I think using this model feels very weird compared to the hybrid models (I'm still not getting used to vpred models xd), and sometimes it has less detail than the hybrid models; also, some people may not like its crazy contrast and high color concentration xd.
Remember to download the .yaml config file and place it alongside the model file, renaming the config file to match the model name.
Try not to use boring_e621_fluffyrock_v4 with this model plz, because it may blur the image outputs.
Use the CFG rescale extension plz, with a value of 0-0.5 (but I think it's ok to not use it xd).

DL: https://civitai.com/api/download/models/299485?type=Model&format=SafeTensor&size=pruned&fp=fp16
YAML: https://civitai.com/api/download/models/299485?type=Config&format=Other

Hybrid/General Purpose:

v105:

This is a tweaked model similar to v90; compared with v90, its colors are more vivid, it has more details, and the realistic style is better, but it may not be as good as v90 for certain images xd
Hybrid models are basically Fluffyrock models with better details and (often) weaker NSFW abilities compared with the original rock models. They are the most versatile models in this series and can do most content and styles.
Use e621 tags, use fewer danbooru tags plz.
When using hybrid models, add artists to prompts plz.
Use embeddings in the negative prompt plz, but you don't need to use a lot of them xd
Clip skip = 1 (try not to use 2 plz).

DL: https://civitai.com/api/download/models/274308?type=Model&format=SafeTensor&size=pruned&fp=fp16

v90:

Based on v75_extra, v80, v85, and yiffymix34.
This is a test model with traindiff, but it should be better than v75? (not much xd)
Nothing much to say about hybrid models, they are basically Fluffyrock models with better details and (often) weaker NSFW abilities compared with original rock models. They are the most common models in this series of models, can do most content and styles.
Each version of the hybrid model is actually not that different (I admit that I pursued the quantity of models but ignored the quality xd).
Use e621 tags, not danbooru tags!!!
(Recommended) add artists to prompts.
(Optional) you can use WD-KL-F8-Anime2 vae to get more colorful images.
Clip skip = 1.

DL: https://civitai.com/api/download/models/209164?type=Model&format=SafeTensor&size=pruned&fp=fp16

v75:

This model is probably a combination of v45 and v60.
Note that hybrid models are common models that can do many different styles by artist names, make sure to add artists to prompts. Clip skip = 1.

DL: https://civitai.com/api/download/models/167882?type=Model&format=SafeTensor&size=pruned&fp=fp16

v45:

This is basically a mix of all my previous models with fluffyrock; it balances style and stability and should be usable as a general model.
It can do both anime and realistic content, but I think it's more realistic.
Note that in some scenarios the generated images don't have as much detail as those from specialized anime/realistic models.
Should be ok with all LoRAs. Clip skip = 1 or 2. Use e621 tags, danbooru tags, and also phrases.

DL: https://civitai.com/api/download/models/109229?type=Model&format=SafeTensor&size=pruned&fp=fp16

Anime:

v100:

Here we go again: this is v100_anime, a tweaked version very similar to v85. It is basically v85 with flatter color, just as stable/unstable as v85, and the hands are still bad :(
This is another average model xd
Clip skip = 1 or 2.

DL: https://civitai.com/api/download/models/261878?type=Model&format=SafeTensor&size=pruned&fp=fp16

v85:

Based on v70, v75 and indigokemonomix beta.
This is an alternative version of v70, like a v70_nsfw: it is better at doing nsfw than v70, but loses some anime style and may not be as crisp and clear as v70.
Could be a little unstable (bad hands are the biggest enemy to anime models), images may be dim, too yellow, and not very colorful.
Clip skip = 1 or 2.

DL: https://civitai.com/api/download/models/202149?type=Model&format=SafeTensor&size=pruned&fp=fp16

v70:

Based on v60, cetusWhaleFall2, and nijijourney loras, and a background scene lora.
This model is probably a combination of v55_SFW and v55_NSFW, maybe more SFW.
Trained with Nijijourney images, could probably do a lot of NJ anime styles.
Lighting is more natural according to a friend xd. Could handle very dark images. Unstable NSFW, this model is more about looking good xd.
Note that doing NSFW is unstable: it can only do humanoid penises, and sometimes the shape of characters' penises will be weird xd. Clip skip = 1 (or rarely 2).

DL: https://civitai.com/api/download/models/163168?type=Model&format=SafeTensor&size=pruned&fp=fp16

v55:

NSFW: This is a NSFW model, which is better at making NSFW content, but it may not be as good as the Hybrid model. Also, compared to v55_sfw, this version has fewer details. Based on v45, meina mix and niji loras. Clip skip 1 or 2.
SFW: This is a SFW model, which is better at making SFW content, it is more flat in style than the nsfw version. Also this version is more niji. Can do nsfw but unstable. Based on v45, meina mix and niji loras. Clip skip 1 or 2.

DL NSFW: https://civitai.com/api/download/models/141821?type=Model&format=SafeTensor&size=pruned&fp=fp16
DL SFW: https://civitai.com/api/download/models/141820?type=Model&format=SafeTensor&size=pruned&fp=fp16

Realistic:

v110:

This time, v110 pays more attention to versatility rather than photorealistic style. Compared with v80/v95, it is less photorealistic (but in some cases it can still do something very photorealistic) and will react to artist tags (though it may not completely replicate the artist's style). It's like a realistic version of the hybrid models.
It may not be as stable as the hybrid models, and not quite as versatile either.

DL: https://civitai.com/api/download/models/328557?type=Model&format=SafeTensor&size=pruned&fp=fp16

v95:

This is a tweaked version of v80 with a little difference. It is similar to v80, with even more fur, brighter colors and lower contrast (so that this model will not look so dark fantasy like v80 xd).
But this version has fewer details, loses some photorealistic style, and feels less stable than v80; there may also be too much fur, so sometimes dragons/aquatics will have fur xd.
Personally I think this version is quite average :(

DL: https://civitai.com/api/download/models/242885?type=Model&format=SafeTensor&size=pruned&fp=fp16

v80:

Based on v65, v50, and bm lora.
An improved version of v65, it might be better than v65 imo, (70% of the outputs are better xd) but may lose some photorealistic style.
This version probably solves the problem of the character's body not being completely covered with fur (maybe solved, maybe not xd), and it also fixes the missing tail issue.

DL: https://civitai.com/api/download/models/182988?type=Model&format=SafeTensor&size=pruned&fp=fp16

v65:

Based on v60, dreamshaper_v8, and midjourney loras.
A similar but different model to v35; it's a model with a strong Midjourney photorealistic style and HDR.
Trained with Midjourney images, could probably do a lot of MJ realistic styles.
Note that this model doesn't like tails and tends to do ferals; also, the character's body may not be fully covered by fur (or may become human xd). Clip skip = 1.

DL: https://civitai.com/api/download/models/156771?type=Model&format=SafeTensor&size=pruned&fp=fp16

v50:

A more common model than v35 with weaker style and better compatibility.
No (or less) Midjourney style this time (couldn't find a dataset to make loras, and it's damn tiring to merge loras or MBW models, I don't wanna do it again xd). Based on v45, v35, and new dawn. Clip skip = 1.

DL: https://civitai.com/api/download/models/136703?type=Model&format=SafeTensor&size=pruned&fp=fp16

FluffyBasedKemonoMegaresE71

https://pixeldrain.com/u/9UA8KMfF

Queasyfluff

I'd recommend setting CFG rescale down to around 15-35 and maybe prompt high contrast or vibrant colors.
Higher CFG rescale tends to bleach colors.
People keep asking what's in the mix, so:
QuEasyFluff (regret this name already) is a TrainDifference merge of easyfluff10-prerelease with a custom block-merged non-furry realism model I made some time back, which itself was made by merging:
HenmixReal_v30
EpicRealism_PureEvolutionV3
LazymixRealAmateur_v10

Model: https://pixeldrain.com/u/71ZWunuG
Yaml: https://pixeldrain.com/u/aoxveaCu

For added realism, try using Furtastic's negative embeddings (found above under Furtastic V2.0).

Queasyfluff V2
What's different about this version?

Fixed colors and CFG rescale issue
Follows directions a bit better
That's about it. Just does 3d and realism better than base easyfluff but still does great drawing style too

EF10-prerelease based: https://pixeldrain.com/u/sfB3fC58
EF11.2 based: https://pixeldrain.com/u/HLALVhng
Yamls: https://pixeldrain.com/u/ZCg93wph

0.7(Bacchusv31)+0.3(5050(BB95v11+Furtastic2))-pruned.safetensors

https://pixeldrain.com/u/WLcp8sTU

BBroFurrymix V4.0

Courtesy of an anon from /b/

https://pixeldrain.com/u/oRtJ9CRy

Dream Porn (Mix)

That's a custom frankenstein mix I made.
Can't remember exactly what's in there.
r34
dgrademix
dreamlike photoreal
???

https://pixeldrain.com/u/ZiB5vT28

SeaArt Furry XL 1.0
DDL: https://civitai.com/api/download/models/437061?type=Model&format=SafeTensor&size=full&fp=fp16
VAE: https://civitai.com/api/download/models/437061?type=VAE&format=SafeTensor
(The VAE is the "usual" sdxl_vae.safetensors; if you've already used an SDXL model before, you won't need to download it again.)

Prompt Structure:
The model was trained with a specific calibration order: species, artist, image detail, quality hint, image nsfw level. It is recommended to construct prompts following this order for optimal results. For example:
Prompt input: "canid, canine, fox, mammal, red_fox, true_fox, foxgirl83, photonoko, day, digitigrade, fluffy, fluffy_tail, fur, orange_body, orange_fur, orange_tail, solo, sunlight, tail, mid, 2018, digital_media_(artwork), hi_res, masterpiece"
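Purely as an illustration of that ordering (this is not an official SeaArt tool, and the helper name below is made up), a tiny sketch that assembles a prompt from the described categories:

```python
# Illustrative only: join tag groups in roughly the calibration order described
# above (species -> artist -> image detail -> time/level -> quality hints).
def build_prompt(species, artists, details, level, quality):
    return ", ".join([*species, *artists, *details, *level, *quality])

print(build_prompt(
    species=["canid", "canine", "fox", "mammal", "red_fox"],
    artists=["foxgirl83", "photonoko"],
    details=["digitigrade", "fluffy_tail", "orange_fur", "solo", "sunlight"],
    level=["mid", "2018"],
    quality=["digital_media_(artwork)", "hi_res", "masterpiece"],
))
```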

Species and Character Calibration:
We have provided a series of nouns for main species calibration such as mammals, birds, and have repeatedly trained on specific furry characters. This helps in generating more accurate character images.

Quality Hints:
The model supports various levels of quality hints, from "masterpiece" to "worst quality". Be aware that "masterpiece" and "best quality" may lean towards nsfw content.

Artwork Timing:
To get images in the style of specific periods, you can use time calibrations like "newest", "late", "mid", "early", "oldest". For instance, "newest" can be used for generating images with the most current styles.

Recommended Image Sizes:
For best quality images, it is recommended to generate using one of the following sizes: 1024x1024, 1152x896, 896x1152, etc. These sizes were more frequently used in training, making the model better adapted to them.

Dimensions Aspect Ratio
1024 x 1024 1:1 Square
1152 x 896 9:7
896 x 1152 7:9
1216 x 832 19:13
832 x 1216 13:19
1344 x 768 7:4 Horizontal
768 x 1344 4:7 Vertical
1536 x 640 12:5 Horizontal
640 x 1536 5:12 Vertical

SeaArt + Autismmix Negative LoRA
https://civitai.com/models/421889
SeaArt: https://civitai.com/api/download/models/470089?type=Model&format=SafeTensor
Autismmix: https://civitai.com/api/download/models/475811?type=Model&format=SafeTensor

Inspired by the popular Boring_e621 negative embedding https://civitai.com/models/87781?modelVersionId=94126 , this is a negative LORA trained on thousands of images across years of data from different boorus with 0 favorites, negative scores, and/or bad tags like "low quality." Therefore putting it in the Negative Prompt tells the AI to avoid these things, which results in higher quality and more interesting images.

Pros:

-Generally increases quality which means more details, better depth with more shading and lighting effects, brighter colors and better contrast

Could be Pro or Con depending on what you want:

-Tends to generate more detailed backgrounds

-Tends towards a more detailed or even more realistic look

Cons:

-Many of the training images were low resolution sketches or MSPaint style doodles, if you are trying to generate sketches or doodle style work putting this in the negatives may be detrimental

-Many of the training images were black and white sketches or otherwise monochrome/grayscale, if you are trying to generate images without color putting this in the negatives may be detrimental

-Accidentally putting this in the positive prompt instead of the negative prompt reduces the quality of images

Test images was done using a weight of 1 with only this LORA in the negative prompt. You can adjust the weight of the LORA to change the impact, however in my testing the impact of different weights was minimal.

The first version uploaded "boring_SDXL_negative_LORA_SeaArtXL_v1" was trained on the Sea Art XL model (https://civitai.com/models/391781/seaart-furry-xl-10) and is intended to be used with that model. A version for AutismMix SDXL (https://civitai.com/models/288584?modelVersionId=324524) is currently being trained. Please feel free to request if you want a version trained specifically for any other SD XL model.

Compassmix XL Lightning
https://civitai.com/models/498370/compassmix-xl-lightning
DDL: https://civitai.com/api/download/models/553994?type=Model&format=SafeTensor&size=full&fp=fp16

Indigo Furry Mix XL
https://civitai.com/models/579632?modelVersionId=646486
V1.0 DDL: https://civitai.com/api/download/models/646486?type=Model&format=SafeTensor&size=pruned&fp=fp16

This is a (test) XL model based on Pony XL with a few modifications; it can react to some artist tags due to SeaArt Furry XL being mixed in. This model is mainly for anime bara kemono content, but should be ok for all content.

For v1.0:

It's ok to use 'score' tags or 'zPDXL' embeddings or not: using score tags will generate kemono-styled (Japanese furry) images, while not using them will generate western-styled (the common ones on e621) images.

Prompt length can significantly affect the style and the effect of score tags.

Can react to some artist tags.

Optional style tags: 'by mj5', 'by niji5', 'by niji6'.

Sometimes output images can be too yellow with score tags (especially with short prompts).

Shixmix_QueasyIndigo SD1.5
Model: https://pixeldrain.com/u/Es8VrAyD
YAML: https://pixeldrain.com/u/v8gi82uy

Galleries

FluffAnon's Generations

https://mega.nz/folder/oqxUXbZb##0w9iSSlL9gO0W_eZ65HU8g

Yttreia's Stuff

https://mega.nz/folder/mb5ACDhQ##o1VQjNnuXzhp0dKH6Aza7Q

Quad-Artist combos

250 hand-picked quad-artist combos, out of ~2800, each genned with 6 different scenarios for a total of 1500 raw gens.
Best viewed by resizing your window so that each row has 6 (or a multiple of it) images.

https://mega.nz/folder/YyIlhIzI#fr38ge0n-1M0AeBuwzjPNw

Artist list as a .txt: https://files.catbox.moe/7ky7fb.txt

SeaArt Artist Combination Examples

https://mega.nz/folder/UvZg1ZiR#MXc-Ax86OLTC4WKUgm9mcA

I made a triple roll with SeaArtXL for 363 artists, based on the prompt used by the anon that posted the bunnies in the last thread. All 1089 gens can be found in the folder above.
There's a .txt file at the end with the artist list.
As usual, it's best viewed by resizing the window so that there is a multiple of three images per row.

PonyXL LoRAs made by /h/

Basically, I just made a python script to download all the LoRAs in this rentry: https://rentry.org/ponyxl_loras_n_stuff . There's a powershell script in there that also downloads everything, but I'm on Linux, which doesn't run that natively. Python is just more accessible in my opinion.

Catboxed them here if you want to add them:

https://files.catbox.moe/ujz8p4.txt
https://files.catbox.moe/c63lpj.py

The txt is just a list of the urls with the artist they correspond to. The .py file reads the .txt and downloads everything in the text file. Anyone can edit the txt file to add or remove LoRAs if you're downloading a large batch.

Strictly speaking, this script works for more than just this rentry. It can basically just download any number of files from a list of URLs.
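For anons who'd rather not run an unknown script blind, here is a minimal sketch of what such a downloader boils down to; the actual catbox .py may differ, and this assumes each non-empty line of the .txt starts with a URL (anything after the URL, like the artist name, is ignored).

```python
# Hedged sketch of a batch downloader for a "one URL per line" list file.
# Note: some hosts (e.g. CivitAI API links) may additionally require auth headers.
import os
import sys
import urllib.request

def download_all(list_path: str, out_dir: str = "loras") -> None:
    os.makedirs(out_dir, exist_ok=True)
    with open(list_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line.startswith("http"):
                continue                      # skip comments / artist-only lines
            url = line.split()[0]
            name = os.path.basename(url.split("?")[0]) or "download.bin"
            dest = os.path.join(out_dir, name)
            if os.path.exists(dest):
                continue                      # don't re-download existing files
            print(f"{url} -> {dest}")
            urllib.request.urlretrieve(url, dest)

if __name__ == "__main__":
    download_all(sys.argv[1])                 # e.g. python download.py ujz8p4.txt
```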

LORAs from the Discord

Various Characters (FinalEclipse's Trash Pile)

• Dawn Bellwether (Zootopia)
• Esix (e621 mascot)
• Fidget (Elysian Tail)
• Freya Crescent (Final Fantasy)
• Gadget Hackwrench (Rescue Rangers)
• Gazelle (Zootopia)
• Human Male x Female Anthro
• Jenna (Balto)
• Juno (Beastars)
• Katia Managan (Prequel webcomic)
• Krystal (Star Fox)
• Kurama - Female (Naruto)
• Kurama - Male (Naruto)
• Lamb (Cult of the Lamb)
• Maid Marian (Robin Hood)
• Master Tigress (Kung Fu Panda)
• Millie (Helluva Boss)
• Nicole Watterson (The Amazing World of Gumball)
• Porsha Crystal (Sing 2)
• Rivet (Ratchet and Clank)
• Roxanne (FNAF)
• Toriel (Undertale)
• Tristana (League of Legends)
• Vicar Amelia (Bloodborne)
https://mega.nz/folder/1m51RTjI##ZmcA4WUuskdXq0ggQCs8BQ (OLD)
https://drive.google.com/drive/folders/1B41fkQ6RwEWamfc5YE4yC8ZQVz-DUUEF

BulkedUp

Here is a LoRA, BulkedUp, that was made with Kohya's GUI. The purpose of this LoRA was to create bigger, buff dudes on different Stable Diffusion models. I personally use between 0.2 and 0.5 strength, with 0.2 adding a bit of muscle and 0.5 going even bigger. Compared to the Hypernetworks I have worked on, I believe that LoRAs are a great alternative, with shorter training time and better generations. However, from what I've seen of how this LoRA behaves, at 0.6 strength and above it starts reflecting the art style of the artists it was trained on. Due to this discovery, I will provide the training dataset for the LoRA in the link.

Using default E621 tags with spaces, like huge muscles, works really well with the LoRA.

One thing I would like to mention about this LoRA is that if it outputs dudes that are too huge or have bizarre anatomy, sending them to img2img or inpaint with a high denoising strength (between 0.4-0.7) can really help fix them.

Here is the link to the LoRA, model formula, training dataset, and images of the examples:

NOTE: The merged model the LoRA was trained on requires the VAE, vae-ft-mse-840000-ema-pruned.ckpt, from stabilityai 
https://mega.nz/folder/BRVVSYZT##hc4dSxLbjXPZQ5EEGh973A

Protogens

protogen - Obsolete version trained on 2400 steps

protogenv2 - Newer version trained on roughly 3200 steps

protogenv2-0004 and protogen-0005 - If standard v2 feels too overfitted/overtrained to you, use these

Activation keywords are:

protogen, protogen visor, protogen face

Link:

https://mega.nz/folder/C2R2ESCT##uwszxIuh6fYm4iq6xu3WsQ

Mr. Wolf (The Bad Guys)

Mr. Wolf from The Bad Guys, but he's a LoRA now.

Responds very well to higher weighting, like :1.3 or :1.2. The LoRA is trained at 704 resolution, so it works best at that size.

Issues: the paws need testing, something seems to be up with them. Also, his suit usually doesn't make sense if you look too closely.

Also, yes it does nsfw.

This model is trained on Gay621

https://pixeldrain.com/u/PnmW8Zoe

https://mega.nz/file/315EiDCD##bsH75Mh00i7Ts6chY99rQI9gP__DJpidbqDd2MbdVPs

Wizzikt

~300 images from Wizzikt.
Download link: https://pixeldrain.com/u/yqadCyMz

beeg wolf wife generator (Sligarthetiger)

My first attempt at a LoRA. This is a LoRA trained on 150 works by Sligarthetiger at around 4000 steps for 6 epochs. Contains two versions.


v2 is trained on Lawlas Yiffy Mix. It isn't as stable as v3, but I feel it is more accurate to the training dataset.

v3 is trained on a certain anime model. It's more coherent and stable, and personally probably better, although it sometimes isn't quite as accurate to the data.


I recommend using both and keeping v2 at a weight of around 0.10 or 0.15.

As it is a style LoRA, other character LoRAs work alongside it as well.

Instructions:

Simply select the LoRA through whatever way you usually would, through the A1111 extension or its native support. No activation keywords needed; it should activate on its own.

Link:

https://mega.nz/folder/vuJUyaAa##ncWjDuMmnQmFoPLf0dw-YA

Cervids

https://pixeldrain.com/u/3a6yvbTD

Various (Penis Lineup, Kass, Krystal, Loona, Protogen, Puro, Spyro, Toothless)

https://mega.nz/folder/UBxDgIyL#K9NJtrWTcvEQtoTl508KiA

Puffin's LoRAs

(Pic of Puffin's stuff, taken 2023/05/16)

Looking them over, some of these are likely the same ones posted before ITT, currently filed under "Birds" up above. Gonna leave it up, for posterity's sake.
Some of these are LyCORIS files; check out the LyCORIS extension if you encounter problems.

Tweetfur: https://civitai.com/api/download/models/11442?type=Model&format=SafeTensor&size=full&fp=fp16
Puffin: https://civitai.com/api/download/models/11432?type=Model&format=SafeTensor&size=full&fp=fp16
Anthro Griffin: https://civitai.com/api/download/models/30044?type=Model&format=SafeTensor&size=full&fp=fp16
Mae Borowski (Night in the Woods): https://civitai.com/api/download/models/30127?type=Model&format=SafeTensor&size=full&fp=fp16
Marie Itami (Brand New Animal): https://civitai.com/api/download/models/30940?type=Model&format=SafeTensor&size=full&fp=fp16
Bea Santello (Night in the Woods): https://civitai.com/api/download/models/31668?type=Model&format=SafeTensor&size=full&fp=fp16
Cockatiel: https://civitai.com/api/download/models/11446?type=Model&format=SafeTensor&size=full&fp=fp16
Anthro Birds: https://civitai.com/api/download/models/32214?type=Model&format=SafeTensor&size=full&fp=fp16
Rito (Species, BotW): https://civitai.com/api/download/models/41394?type=Model&format=SafeTensor
Falco (Star Fox): https://civitai.com/api/download/models/42650?type=Model&format=SafeTensor
Coco Bandicoot: https://civitai.com/api/download/models/57895?type=Model&format=SafeTensor
Elora (Spyro): https://civitai.com/api/download/models/58081?type=Model&format=SafeTensor
Zorayas (Elden Ring): https://civitai.com/api/download/models/59321?type=Model&format=SafeTensor
Tempest Shadow (MLP): https://civitai.com/api/download/models/62278?type=Model&format=SafeTensor
Secretary Bird: https://civitai.com/api/download/models/63229?type=Model&format=SafeTensor
Anthro Corvids: https://civitai.com/api/download/models/64462?type=Model&format=SafeTensor

Cynfall's LoRAs

https://mega.nz/folder/DRI0RY4Q#g1IJ7Ch1hM6-sAG7dGkJ7g

Brooklyn (gargoyles) USE Brooklyn (gargoyles)
Bathym USE bathym
Blaidd USE Blaidd (elden ring)
Batzz USE demon lord dragon batzz 
Barrel USE barrel (live a hero)
Exveemon USE exveemon
Death USE death (puss in boots)
Dire USE Dire (fortnite)
Fox Mccloud USE Fox Mccloud
Fenrir USE fenrir (housamo)
Garmr USE garmr
Freddy USE freddy (dislyte)
Guilmon USE guilmon
Horkeu kamui USE horkeu kamui (tas)
Incineroar USE incineroar
Jon talbain USE jon talbain
Law USE law (sdorica)
Leomon USE leomon
Macan USE macan (tas)
Maliketh USE Maliketh (elden ring)
Meowscles USE meowscles
Mountain USE mountain (arknights)
Nasus USE nasus (lol)
Nimbus USE nimbus (world flipper)
Renekton USE renekton
Seth USE set (tas)
Shirou Ogami USE Shirou ogami
Simba USE simba
Skavens USE skaven
Steel USE steel (balto)
Tadatomo USE tadatomo
Volibear USE volibear
Vortex USE Vortex (helluva boss)
Wargreymon USE wargreymon
Warwick USE warwick (lol)
Weregarurumon USE weregarurumon 
Wolf O'Donnell USE Wolf O'Donnell

Feral on Female

https://mega.nz/folder/hbgTWYTa#4rngMt0MEhMAw6D02t-coQ

Valstrix's Gathering Hub (Monster Hunter and more)

https://drive.google.com/drive/folders/1N3QB9oAGJIv4dLNzEIvNQj7LkKrS6_y4

Slugcats (RainWorld)

From: https://civitai.com/models/94795/slugcats-rainworld-wip
https://civitai.com/api/download/models/101116?type=Model&format=SafeTensor

AnonTK's LoRA Repository (TwoKinds LoRAs)

https://mega.nz/folder/DtFz1IbQ#wZJFX0aYEL4rKwBBmgyaZQ

Tom Fischbach Style LoRA - PDXL

https://mega.nz/folder/oXZHwAIb#LarZqlfkp9Zr45suaZkRZw

Assorted Random Stuff

Artist comparisons

PDV6XL Artist Tag Comparison
Samples made using https://civitai.com/models/317578/pdv6xl-artist-tags
https://mega.nz/folder/YXcEgBCK#Jydfj9qF9IXyCjWhfw8rnQ

Autismmix XL Artist/Hashes Comparison
https://mega.nz/folder/x25QyQhL#MWpbJfSDIBo6dpn9APpGVQ

PDXL Artist/Hashes Comparison
https://mega.nz/folder/ErBSQR7A#MPUWcWy9bA9QEJzn0unM-Q

SeaArt Artist Comparison
https://mega.nz/folder/MrgAiA6J#bt6z2MnMhcWsUVKoSpKuQg

SeaArt Furry XL SDXL Base Artist Comparison:
https://rentry.org/sdg-seaart-artists

Yiffmix v52 Artist Tests: https://mega.nz/folder/JgRG2a7L#rL9o48_vVxere2lwQjShUg

Working artists Easyfluff Comparison V1

I regenned my artist collection word document thing because I figured I could with dynamic prompts.
Also, there are more artists now thanks to the 4-artist-combo anon providing them.
I think this will be the final version for a while; details of further improvements and changes are outlined in the PDF.

V1 Males: https://www.mediafire.com/file/g7y2i8u239k9zut/Working_artists_redemption_arc_edition.pdf/file
V1 Females: https://www.mediafire.com/file/mbg5rk9c34h243y/Working_artists_cooties_edition.pdf/file
V1 Base SD Artists: https://www.mediafire.com/file/mnqv47mgji4baa4/Working_artists_real_artists_edition.pdf/file

Working artists Easyfluff Comparison V2 (up to 28725 different e621 based artist previews)
Explanatory Memorandum: https://www.mediafire.com/file/ymxqexl6r3nl68s/Working_artists_explanatory_memorandum.pdf/file

I have generated images using a list of 28725 artist tags from the e621 csv file which came with dominikdoom's tagcomplete extension.
Due to the strain making these puts on my system, the images have been split up into 9 PDF files with 3192 images in each. This also keeps them relatively light on the client side, resource-wise.
The images have been placed in descending order based on how many images that artist had tagged to them on e621 at the time the csv file was made. The artist prompts used and the amount of tagged images is displayed above each picture. This text is selectable for convenient copy+pasting.
The parameters used for these guides will be placed in the PDFs.

https://www.mediafire.com/folder/3kv4l4l3c6sgq/SD+Artist+Prompt+Resources

(Not embedded due to filesize)
Comparison of Base SD-Artist - Furry artist combos (done on an older furry model, likely YiffAnything)

https://files.catbox.moe/fs3blo.jpg

thebigslick/syuro/anchee/raiji/redrusker/burgerkiss/blushbrush Prompt Matrix comparison (done on a Fluffyrock-Crosskemono 70/30 merge)

https://mega.nz/file/fw0ggaaQ#nGpCzW7C7u3Q5w7sr15azX7GO8jdSBRTyjqsKriv60A

Vixen in Swimsuit artist examples (Model: 0.3(acidfur_v10) + 0.7(0.5(fluffyrock-576-704-832-960-1088-lion-low-lr-e22-offset-noise-e7) + 0.5(fluffusion_r1_e20_640x_50)) .safetensors) (DL link can be found above)

https://mega.nz/folder/kPEjUaDB#n-IIguEypQkfnfvig0EH4w

Artist examples using Toriel as an example (0.5 (0.7fluffyrock0.3crosskemono) + 0.5 fluffusion)

https://mega.nz/folder/vAhT1CjQ#6jDFFA4VDWpZTnrgSeEevQ

Big artist comparison

https://files.catbox.moe/hi3crm.pdf

EasyFluff Comparisons
https://rentry.org/easyfluffcomparison/

Easyfluff V10 Prerelease

There are artist comparisons and they are all nice and stuff but I was curious how scalies would turn out.
Here are some of a dragoness, if anyone cares.
I went with detailed scales and background.
Unfortunately, I had FreeU turned on, so it's not going to be perfect for all of you.
They are broken up at a more or less random spot to keep them from getting insanely huge.
I haven't examined them all yet. Just thought I'd share.

https://mega.nz/folder/tHMTkDxZ#ga3iHKb_7AHpgzSH2YDGfg

EasyFluff V11.2 Comparison

Prompt used:
by <artist>,  a nude male anthro coyote standing in the water, mesa, canyon, rocks, sheath, penis tip, balls, partially submerged, (worm's-eye view,:0.9) (hazel eyes:1.1), looking at viewer, tail,
BREAK (masterpiece, best quality:1.2),  pinup,
Negative prompt: EasyNegative, boring_e621_fluffyrock_v4
Steps: 35, Seed: 3945749951, Sampler: DPM++ 2M SDE Karras, CFG scale: 6, Size: 512x768, Batch: 6x1, Parser: Full parser, Model: EasyFluffV11.2, Model hash: 821628644e, VAE: vae-ft-mse-840000-ema-pruned, Backend: Original, Version: 5142b2a, Operations: txt2img; hires; txt2img; hires; txt2img; hires; txt2img; hires, Hires steps: 20, Hires upscaler: 4x_foolhardy_Remacri, Hires upscale: 1.5, Hires resize: 0x0, Hires size: 768x1152, Denoising strength: 0.4, Latent sampler: DPM++ 2M SDE Karras, Image CFG scale: 6, Token merging ratio hr: 0.5, Dynamic thresholding enabled: True, Mimic scale: 7, Threshold percentile: 100

https://files.catbox.moe/yeiuv1.jpg

0.6(fluffyrock-576-704-832-960-1088-lion-low-lr-e209-terminal-snr-e182) + 0.4(furtasticv20_furtasticv20) Comparison
https://pixeldrain.com/u/xUVfbjdc

Prompt used:    
(by artist:1.3), pupils, eyebrows, turf, walking, (front view), standing, full-length portrait, model sheet,
BREAK
(white fur, black fur, grey fur, snow leopard:1.3), anthro, solo, male, (muscular:1.0), long tail, (anus, butt), (big balls, ball tuft, (penis), (thick penis), big penis, veiny penis), looking at viewer, seductive, smile,
Negative prompt: boring_e621, kemono, young, cub, (hair, neon hair, long hair), female, woman, boobs, girly, (wolf, fox, bear, stripes:1.3), (yellow fur, grey fur, pink fur, blue fur:1.4), canine cock, multiple tails, handpaw, feral, sharp teeth, fangs, tired, black eyelashes, black sclera, (human lips:1.9), vore, simple background, ubbp, bwu, updn, (eyes closed, narrowed eyes), macro, grass, snow, outside,
Steps: 22, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2147328661, Size: 704x704, Model hash: 96bf03cefa, Model: 0.6(fluffyrock-576-704-832-960-1088-lion-low-lr-e209-terminal-snr-e182) + 0.4(furtasticv20_furtasticv20),

https://files.catbox.moe/5ylrek.pdf

furry, (((fur))), male, digital_art, (completely nude:1.3), penis, foreskin, testicles, scrotum, (detailed background, outside, city streets), in front of shop, walking, on footpath, crowd, exhibitionism, cumdrip, looking at viewer, by
Negative prompt: worst quality, low quality, multi limb, boring_e621_v4, sepia, simple background, monochrome, muscular, boring_e621_fluffyrock_v4
Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 7.5, Seed: 1959977494, Size: 600x600, Model hash: d66e4da0d7, Model: EasyFluffV11.1, VAE hash: c6a580b13a, VAE: vae-ft-mse-840000-ema-pruned.ckpt, CFG Rescale: 0.4, Version: 1.6.1

Ponydiffusionv6 XL Artist Comparison

Nabbed an anon's artist list from /h/ and put together some grids for PDXL; gonna do one for anime girls later and maybe run the EF artist list I got, just to see what does and doesn't work.

https://mega.nz/folder/kvwQHLiA#fmI-1cgoCagt3vBEhHudag

Autismmix Artist Comparison

Went ahead and put together an xyz spreadsheet for autismmix of all the artists from a txt file posted some threads ago

https://mega.nz/folder/x25QyQhL#MWpbJfSDIBo6dpn9APpGVQ

CompassMix artist mixes

So far it's just 55 mixes, split between two folders: one with plain triple mixes and one with weighted quadruple mixes.

https://mega.nz/folder/EmB1nR5Q#99x_PAvjw5L5a1FcDzpApg

Different LORA sliders - what do they mean?

XY-Plot LORA sliders

Flick on the master switch, pick a lora from the dropdown, and drag the slider. Flick the other switch for dual sliders. Speaking very loosely, the top slider and bottom slider affect how much the lora changes the result's shape and coloring.

Have a demonstration, picrel. Asked for a jackal in a kitchen making pizza on a mixed model (mostly anythingfurry), using a lora to try to make the jackal into a Lucario.

UNet alone needed a lot of weight, but colors the figure like a Lucario without changing the figure's body much.
TEnc alone makes the jackal gain Lucario characteristics without being Lucario; specifically, we see the hips and "shorts" turning into clothing.
Both going beyond 1 starts changing the scene, probably applying training data too strongly. Both going negative turned the jackal into a fox and made a mess of features.

What about samplers?

Sampler Examples v3

SDE is normally used at lower steps than other samplers.
12 steps in SDE have around the same effect as 20 steps in other samplers.

I mostly tend to use the DPM++ 2M Karras or DPM++ SDE Karras samplers, with 20 or 12 steps respectively for testing and playing around, and 35 and 20 for "serious" (lol) prompting.

Euler a at 20 steps is also pretty good at prompt testing, and DDIM, from what others in these threads say, is good for fur-looking fur.

Like any aspect of SD, however, there is no right answer that always works.

Script for comparing models


Dropping by from /g/ with a random technical guide for comparing the similarity of different models.
Script source is https://huggingface.co/JosephusCheung/ASimilarityCalculatior; the documentation for said script, however, is pretty atrocious, so I made my own.

Okay, thanks? What am I supposed to do with this?

It might be helpful for figuring out whether a model is similar enough for a LoRA to still work, for determining whether a model is worth merging into a different model, or for identifying the models that went into a merge whose author refuses to share the recipe, etc.

Anyways, hope it helps someone.
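If you just want the gist of what such a script measures, here is a rough illustration (this is not the linked ASimilarityCalculatior script, just the general idea of comparing shared weights):

```python
# Rough illustration: average cosine similarity over tensors two checkpoints share.
# Filenames are placeholders; the linked script is more sophisticated than this.
import torch
from safetensors.torch import load_file

def model_similarity(path_a: str, path_b: str) -> float:
    a, b = load_file(path_a), load_file(path_b)
    sims = []
    for key, ta in a.items():
        tb = b.get(key)
        if tb is None or tb.shape != ta.shape:
            continue  # skip weights the two models don't have in common
        sims.append(torch.nn.functional.cosine_similarity(
            ta.flatten().float(), tb.flatten().float(), dim=0))
    return float(torch.stack(sims).mean()) if sims else 0.0

print(model_similarity("model_a.safetensors", "model_b.safetensors"))
```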

Wildcards

Use either the Wildcard or Dynamic prompts extensions!
List of wildcards: https://rentry.org/NAIwildcards
Dynamic prompts wildcards: https://github.com/adieyal/sd-dynamic-prompts/tree/main/collections
(These work even without the dynamic prompts extension if you prefer the older wildcards one; just grab the .txt files.)
Most modern models were trained on the majority if not all of E621.
You can grab a .csv containing all e621 tags from https://e621.net/db_export/ and filter for category 1 (artist tags); see the sketch below.
Here is an artist listing of the entirety of e621, sorted by number of posts (dated mid Oct. 2023): https://files.catbox.moe/mjs8jh.txt
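A minimal sketch of that filtering step, assuming the extracted export has name, category and post_count columns and that category 1 is the artist category (check the actual CSV header if yours differs; the input filename is just an example):

```python
# Build an artist wildcard .txt from the e621 tags export (extract the .csv.gz first).
# Column names and "category == 1 means artist" are assumptions about the export.
import csv

def build_artist_wildcard(csv_path: str, out_path: str, min_posts: int = 50) -> None:
    artists = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["category"] == "1" and int(row["post_count"]) >= min_posts:
                artists.append((int(row["post_count"]), row["name"]))
    artists.sort(reverse=True)  # most-posted artists first
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n".join(name for _, name in artists) + "\n")

build_artist_wildcard("tags-2023-10-15.csv", "e621_artists.txt")
```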

All artists from fluffyrock.csv sorted by number of posts: https://files.catbox.moe/vtch6n.txt

Pose tags: https://rentry.org/9y5vwuak

Wildcards collection: https://files.catbox.moe/lwh0fx.7z

Species Wildcards Collection: https://rentry.co/4sy6i33r

Huge Wildcard Collection sorted by artist types, poses, media etc.: https://mega.nz/folder/UBxDgIyL#K9NJtrWTcvEQtoTl508KiA/folder/pJR0mLjb

Pony Diffusion XL V6 Wildcards:
e621: https://files.catbox.moe/icf7ak.txt
Danbooru: https://files.catbox.moe/k2pgw2.txt

PDXL V6 artist tags with at least moderate effect on gens
https://files.catbox.moe/a1srau.zip

e621 Character Wildcard with supporting tags: https://files.catbox.moe/4k91ms.txt

OpenPose Model

For use in Blender; allows for posing for use in the ControlNet extension.

https://toyxyz.gumroad.com/l/ciojz

"What does ControlNet weight and guidance mean?"

ControlNet Weight and Guidance Rate

Img2Img examples

Raw doodle vs. final result:
https://imgbox.com/g/tdpJerkXh6

Newer example showcasing workflow

E621 Tagger Model for use in WD Tagger

!!NEW!! Zack3D's tagger model (see below) is quite old by now; Thessalo has made a newer, better model which sadly has not been adapted to WD Tagger and the like just yet.
Link to the model: https://huggingface.co/Thouph/eva02-clip-vit-large-7704/tree/main
Batch inference script for use with Thessalo's Tagger model: https://mega.nz/folder/OoYWzR6L#psN69wnC2ljJ9OQS2FDHoQ/folder/HwgngBxI

Reminder that the prior GitHub repo has been discontinued; delete the extension's folder and install https://github.com/picobyte/stable-diffusion-webui-wd14-tagger instead, which reportedly works even with WebUI 1.6.
The patch below seems to NOT BE NEEDED anymore as of Oct 17th 2023 if you are using the picobyte repo. Download only the Convnext V2 model, and place it as described.

The WD Tagger extension as-is only generates Danbooru tags, which is great when training on NAI and other anime-based models. For models based on e621, the tags may need to be changed accordingly. For that reason, you can use the following model instead of the WD one.

E621 Tagger

Convnext V2: https://pixeldrain.com/u/iNMyyi2w
Patched WD1.4: https://cdn.discordapp.com/attachments/1065785788698218526/1067966541699743845/stable-diffusion-webui-wd14-tagger.zip
Mirror for Patch: https://pixeldrain.com/u/NA5fvUcJ

If you encounter problems while using the convnext model, try unchecking "Sort alphabetically" in the extension.
Older Deepdanbooru Model: https://pixeldrain.com/u/XTcj5GHz


Upscaler Model Database

Recommendations are Lollypop and Remacri. Put them in models/ESRGAN.
https://upscale.wiki/wiki/Model_Database

LoCon/LoHA Training Script / DAdaptation Guide

https://rentry.co/dadaptguide


Script: files.catbox.moe/tqjl6o.json
Gallery: imgur.com/a/pIsYk1i
www.sdcompendium.com

Script for building a prompt from a lora's metadata tags

Place into your WebUI base folder. Run with the following command:
python .\loratags.py .\model\lora\<YOURLORA>.safetensors
https://pastebin.com/S7XYxZT1
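If you're curious what such a script most likely reads, here is a hedged sketch of the same idea (the actual pastebin script may work differently); it assumes the LoRA carries kohya-style metadata with an ss_tag_frequency entry, which kohya-trained LoRAs normally have.

```python
# Sketch: pull the most frequent training tags out of a LoRA's safetensors metadata.
# Assumes kohya-style "ss_tag_frequency" metadata; this is not the pastebin script itself.
import json
import sys
from safetensors import safe_open

def top_tags(lora_path: str, n: int = 30) -> str:
    with safe_open(lora_path, framework="pt") as f:
        meta = f.metadata() or {}
    freq = json.loads(meta.get("ss_tag_frequency", "{}"))
    counts = {}
    for dataset in freq.values():          # one entry per training folder
        for tag, count in dataset.items():
            counts[tag] = counts.get(tag, 0) + count
    return ", ".join(sorted(counts, key=counts.get, reverse=True)[:n])

if __name__ == "__main__":
    print(top_tags(sys.argv[1]))
```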

Example workflows

Text2Image to Inpaint to SD Upscale Example:
Example Workflow

Using ControlNet to work from sketches:

Usually I start with a pretty rough sketch and describe the sketch in the prompt, along with whatever style I want. Then I'll gen until I get one that's the general idea of what I want, then img2img that a few dozen times and pick the best one out of that batch. I'll also look through all the other ones for parts that I like from each. It could be a paw here or a nose there, or even just a particular glint of light I like. I'll composite the best parts together with photoshop and sometimes airbrush in certain things I want, then img2img again. When I get it close enough to the finished product I'll do the final upscale.

Picrel is a gif of another one I've posted here that shows what these iterations can look like.

Sketch to genned image

I use lineart controlnet with no preprocessor (just make sure your sketch is white-lines-on-black-background or use the invert processor.) Turn the control weight down a bit. The rougher the sketch, the lower your control weight should be. Usually around 0.3-0.7 is a good range.
>Couldn't you theoretically use the lineart preprocessor to turn an image into a sketch, and then make adjustments to it there if you want to add or remove features?
Good call, that's exactly what I did with part 3 of the mouse series. Mouse part is mine, the rest is preprocessor.

Adding sketches to preprocessor

Tutorials by fluffscaler

Inpainting: https://rentry.org/fluffscaler
PDXL: https://rentry.org/fluffscaler-pdxl

CDTuner ComfyUI Custom Nodes

A kind anon wrote custom nodes for ComfyUI to achieve similar results as https://github.com/hako-mikan/sd-webui-cd-tuner.
https://rentry.org/r9isz
Copy the script into a text file, rename it to cdtuner.py, and put it into your custom_nodes folder inside your ComfyUI install.

Do not make the same mistake I did: only save the contents of the script. If you use Export > Raw, make sure to remove everything before the first import and after the last }.

The nodes are now called:
SaturationTuner
ColorTuner
LatentColorTuner
I made some slight changes because the original CDTuner implementation is a bit weird.
LatentColorTuner will allow you to edit latent colors with similar sliders to CDTuner but you don't need to re-generate images all the time.
The actual ColorTuner, which is implemented almost the same as CDTuner, is a bit of a hack job because you can't really get the step count back out of a sampler. So rather than editing just the cond/uncond pair in the last step, it edits them at all steps. I think this leads to the changes being a bit better integrated into the images, but it's different from the A1111 CDTuner.

Input and output are the same type, so you just plop it in as a middle man before the node you want it to apply to.
It gives different effects depending on where you place it.

ComfyUI Buffer Nodes

https://rentry.org/dgbfb

Sillytavern Character Sprites

Piko
https://files.catbox.moe/wqmnxv.rar

Tomoko Kuroki
https://files.catbox.moe/iyorf6.rar

fay_spaniel
https://files.catbox.moe/7jj4mz.rar

Elora
https://files.catbox.moe/kalun9.rar

Character sprites were made using Easyfluff v10 with a character lora, using ControlNet (Reference only) and changing the expressions for each sprite.

Easyfluff V11.2 HLL LoRA (Weebify your gens)

A set of LoRAs trained on Dan- and Gelbooru images and Easyfluff, allowing for better non-furry gens and use of anime artists.
A guide can be found here

NAI Furry v3 random fake artist tags

NAI Diffusion Furry v3, similar to PDXL, no longer has working artist tags, or has at least obfuscated them. The pastebin below has examples of fake artist tags that yield different styles.
https://pastebin.com/LAF342fY
