fantasy.ai is a setup for a lawsuit scam. Do not trust it, or your ass is gonna become the property of that scammer. No "% shares" will return the intellectual property you are going to lose. Its author was previously an NFT scammer.
Do backups. There are many reasons for things to perish.
civitai.com deletes models, sometimes just by mistake!
https://rentry.org/clipfix - This rentry is about FIXING bad merges and models.
You can do module training for text AI. It can be run in the KoboldAI GUI and is popular on /aids/. That means you can make it focus on desired content, like replaying stories from a book but transferring them to a desired tabletop RPG system. That also includes making Stable Diffusion models to accompany them, in the same GUI. Facebook's LLaMA is trained on different data, so it can give a unique feel to generations if modules are trained for it.
Latest update; the others are in XX._Changelog:
- Thursday, 4 May 2023
Added link to Frosting, a free in-browser AI generator, to 1.2.1._Online_providers.
If you don't like this rentry, there are others, probably more up to date:
- https://rentry.org/sdg-link (4chan's Technology compendium)
- https://rentry.org/hdgfaq (4chan's Hentai compendium)
FOR YOUR SAFETY, ALWAYS SCAN FOR PICKLES BEFORE USING MODELS FOUND ONLINE. THOSE TOO!
The best way is to download only files that are in the
safetensors format. And if you want to share your model, you should convert it to that format. Converting does not scan for pickles, so scan first, then convert.
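The scanning itself never needs to execute anything. Below is a minimal, stdlib-only sketch of what pickle scanners look for; it is an illustration, not a replacement for a real scanner like picklescan, which also whitelists known-safe globals (and note that a .ckpt is a zip archive, so you would extract its data.pkl first):

```python
import pickletools

# Opcodes that can trigger arbitrary code when a pickle is actually loaded.
# Real scanners also distinguish safe globals (e.g. torch tensor rebuilders)
# from dangerous ones (e.g. os.system).
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def looks_dangerous(pickle_bytes: bytes) -> bool:
    """Statically walk the pickle opcode stream -- nothing is executed."""
    try:
        return any(op.name in SUSPICIOUS
                   for op, _arg, _pos in pickletools.genops(pickle_bytes))
    except Exception:
        return True  # unparseable pickle: treat as dangerous
```

A pickle of plain data (dicts, lists, numbers) passes; any pickle that references an importable callable, which is the mechanism exploits use, gets flagged.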
safetensors files can be compressed down to about 50% of their size with 7-Zip.
safetensors is a pickle-free format by design, and such files also run a bit better in the WebUI
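The reason the format is pickle-free: a .safetensors file is just an 8-byte little-endian length, a plain JSON header of that length, and raw tensor bytes after it, so loading never deserializes code. A minimal stdlib sketch that reads only the header:

```python
import json
import struct

def read_safetensors_header(path: str) -> dict:
    """Read just the JSON header of a .safetensors file.

    Layout: 8-byte little-endian u64 header length, then that many
    bytes of JSON describing tensor names, dtypes, shapes, offsets.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))
```

Handy for peeking at a model's metadata (the `__metadata__` key) without loading gigabytes of weights.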
Hash numbers in the WebUI are bad: they are incomplete, AND their calculation process was already changed once, making old hashes irrelevant. ANYONE CAN UNZIP A MODEL FILE AND REPLACE THE STARTING FILES TO MATCH HASH NUMBERS FROM THE WEBUI!!! Use tools like https://github.com/gurnec/HashCheck instead.
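A proper hash covers every byte of the file, which is what tools like HashCheck compute. A minimal sketch of full-file SHA-256 in Python, reading in chunks so multi-gigabyte models don't blow up memory:

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the ENTIRE file, unlike shortcut hashes that sample only
    a small slice of the file and can therefore be spoofed."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Compare the result against the hash the uploader published; any single changed byte anywhere in the file changes the digest.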
SD RESOURCE GOLDMINE was the main source of information, and it still holds the essentials
|SD RESOURCE GOLDMINE||SD RESOURCE GOLDMINE 2||SD RESOURCE GOLDMINE 3|
SD-based news and links. Archives of drama and sources. The rentry limit was reached, so it continues in the next link
https://rentry.org/zk4u5 Japanese version of SD Resource Goldmine, some additional links and sources
|https://sweet-hall-e72.notion.site/A-Traveler-s-Guide-to-the-Latent-Space-85efba7e5e6a40e5bd3cae980f30235f||A Traveler’s Guide to the Latent Space Big article/guide on AI Art.|
|https://github.com/AUTOMATIC1111/stable-diffusion-webui||AUTOMATIC1111's WEBUI Mainly used, advanced tool that has many customizable extensions.|
|https://github.com/AbdBarho/stable-diffusion-webui-docker||Docker clone of AUTOMATIC1111's WEBUI Emulated/separate instance, so it is a bit more stable to work with. Some extensions do not work.|
|https://camenduru.itch.io/stable-diffusion-webui||Easy-install clone of AUTOMATIC1111's WEBUI It will reinstall your
|https://github.com/comfyanonymous/ComfyUI||ComfyUI One of the most promising advanced GUIs for SD. A node system similar to chaiNNer that, given time, will probably offer more options than AUTOMATIC1111's WEBUI. There >click_me< are examples of what it can do.|
|https://nmkd.itch.io/t2i-gui||NMKD Stable Diffusion GUI Simple and stable GUI that has some unique features. Also, it is still being expanded.|
|https://www.patreon.com/DAINAPP||Stable Diffusion GRisk Paywalled Patreon GUI that provides sprite animations.|
|https://github.com/invoke-ai/InvokeAI||InvokeAI A tool that tries not only to generate images, but also to give you a vast amount of tools to edit them.|
|https://github.com/Sygil-Dev/sygil-webui||Web-based UI for Stable Diffusion, Created by Sygil.Dev Easy and stable WEBUI, that was later replaced by AUTOMATIC1111's WEBUI.|
|https://github.com/Sanster/lama-cleaner||Lama Cleaner It lets you remove elements from images. Inpainting focused GUI.|
|https://github.com/ddPn08/Lsmith||Lsmith Lsmith is a fast StableDiffusionWebUI using high-speed inference technology with TensorRT.|
|https://github.com/lshqqytiger/stable-diffusion-webui-directml||Stable Diffusion web UI by lshqqytiger Fork of AUTOMATIC1111's WEBUI to run with AMD cards.|
|https://stable-diffusion-art.com||Blog with a lot of useful guides.|
|https://rentry.org/amddockerarch||This is for Arch Linux/Manjaro|
|https://rentry.org/sable-sdw-ubuntu-amd-gfx8||HatefulSable's Guide to Installing Stable Diffusion WebUI on Ubuntu With an AMD GFX8 GPU|
|https://rentry.org/sable-sdw-ubuntu-nvidia||HatefulSable's Guide to Installing Stable Diffusion WebUI on Ubuntu With an Nvidia GPU|
|https://rentry.org/sd-amd-fix||SD AMD Fix|
|https://rentry.org/sd-amd-gfx803-gentoo||Stable Diffusion with AMD RX580 on Gentoo (and possibly other RX4xx and RX5xx AMD cards)|
I need to make descriptions!!!
|Introduction||Dark mode UI||Saving and Managing Prompts||Prompt Weights|
|Source: imgur||Source: imgur||Source: imgur||Source: imgur|
|Merging models||Testing Prompts with Matrices||Prompt transformation||Outpainting||Inpainting|
|Source: imgur||Source: imgur||Source: imgur||Source: imgur||Source: imgur|
|Part 1.1. - Introduction to Img2img||Part 1.2. - Using img2img as an upscaling tool||Part 1.3. - Variants and party tricks||Part 1.4. - Creating an image with img2img|
|Source: imgur||Source: imgur||Source: imgur||Source: imgur|
|Part 2.1. - Introduction to Inpaint||Part 2.2. - Fixing up your generated images using inpaint||Part 2.3. - Creating an image with inpaint|
|Source: imgur||Source: imgur||Source: imgur|
Extensions in this main section are unique in some way.
AUTOMATIC1111's WEBUI extensions to make it able to work with other graphical tools.
|https://github.com/MemeLord3/block-merge-script||Merge Block Weighted - Script|
|https://github.com/bbc-mc/merge-percentage-visualize||Merge percentage visualize script|
|https://github.com/bbc-mc/sdweb-merge-block-weighted-gui||Awesome extension for merging. Extended guide here. It can also be used to fix models; about that, see there. Very WIP version of block mapping by Supernut, under
|https://github.com/Maurdekye/model-kitchen||Model kitchen Allows you to automate the creation of model merges via "recipe" files.|
|https://github.com/tkalayci71/embedding-inspector||Embedding-inspector extension Inspect and mix embedding.|
|https://github.com/CodeExplode/stable-diffusion-webui-embedding-editor||A very early WIP of an embeddings editor for AUTOMATIC1111's webui.|
|https://github.com/p1atdev/stable-diffusion-webui-cafe-aesthetic||Like the embedding editor above, but for aesthetics instead.|
|https://github.com/LoFiApostasy/block-merge-script||Merge Block Weighted - Script This script does the same thing as sdweb-merge-block-weighted-gui. But the resulting merge is only used to generate the current prompt.|
|https://github.com/hako-mikan/sd-webui-lora-block-weight||LoRA Block Weight merging|
|https://rentry.org/Merge_Block_Weight_-china-_v1_Beta||Machine translation of the Chinese Merge Block Weight 魔法密录1.0Beta guide. Images under
|https://rentry.org/BlockMergeExplained||More technical explanation of functions and presets of the extensions. Recently updated.|
Supernub's Block Merge Notes
This picture only presents a crude approximation, as a guide for mapping your models. Every model is different, thanks to different prompt weights and tensor deviations.
for Merge Block Weighted - GUI
Block Merge - color and composition
*Did a recreation of the post, because I don't want to go blind...
Source: Unstable Diffusion Discord
[Source] *The white background was annoying, so I remade it... again... More
AUTOMATIC1111's WEBUI extensions that help with or extend model merging
|https://github.com/devilismyfriend/PXL8||PXL8 - Pixel Art AI A script that runs a specific finetuned model. v1 of the model is free, while v2 costs $50.|
|https://github.com/AUTOMATIC1111/stable-diffusion-webui-pixelization||Pixelization Officially supported by AUTOMATIC1111's WebUI|
|https://github.com/C10udburst/stable-diffusion-webui-scripts/tree/master/pixel_art||C10udburst's Pixel Art script|
Those are the extensions recommended in https://rentry.org/clipfix for fixing models and merges. Read there for more. It is amazing to have this guide.
sdweb-merge-block-weighted-gui is here again, because fixing models is very important in modern times, where "mixing the mixes" is a thing.
Related section: Extensions for fixing models.
|https://github.com/iiiytn1k/sd-webui-check-tensors||Tensor checker script It will tell you whether a model needs fixing.|
|https://github.com/arenatemp/stable-diffusion-webui-model-toolkit||Model Toolkit Export CLIP, fix Tensor etc.|
|https://github.com/bbc-mc/sdweb-merge-block-weighted-gui||Merge Block Precise merging and finalization of fixing. Alternative to
External tools of the same type are listed here. Some of them could work nicely with the extensions below.
|https://github.com/deforum-art/deforum-for-automatic1111-webui||Deforum Stable Diffusion For making animations out of generations.|
|https://github.com/s9roll7/ebsynth_utility||Ebsynth utility Face capture of videos/recordings.|
|https://github.com/LonicaMewinsky/gif2gif||gif2gif Like img2img but you use gif files.|
|https://github.com/Kahsolt/stable-diffusion-webui-vid2vid||vid2vid Something like the one above, but more advanced. Read the front page closely.|
|https://github.com/fishslot/video_loopback_for_webui||Video Loopback for WebUI|
|https://files.catbox.moe/v7yvv8.py||Deforum related script, or something... (I'm not sure).|
|https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel||Try interpolating on the hidden vectors of conditioning prompt to make seemingly-continuous image sequence, or let's say a pseudo-animation.|
|https://github.com/Kahsolt/stable-diffusion-webui-sonar||Like the one above, but for single-prompt optimization|
|https://github.com/toriato/stable-diffusion-webui-wd14-tagger||Tagger for Automatic1111's WebUI Interrogate booru style tags for single or multiple image files using various models, such as DeepDanbooru.|
|https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git||Booru tag autocompletion for A1111 You can give it data of tags to recognize. Works as advertised.|
|https://github.com/mix1009/model-keyword||Automatic1111 WEBUI extension to autofill keyword for custom stable diffusion models.|
|https://github.com/adieyal/sd-dynamic-prompting||Dynamic Prompts extension The popular Wildcards v2 Extension.|
|https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards||Wildcards Script Based on a dated fork of the one above (= Wildcards v1). It is simpler.|
|https://github.com/camenduru/stable-diffusion-webui-artists-to-study||Artists To Study Previews of the artists and styles to use as prompts.|
|https://github.com/yfszzx/stable-diffusion-webui-inspiration||Inspiration As the above, but more options and requires additional download.|
|https://github.com/animerl/novelai-2-local-prompt||Prompt converting from NovelAI|
|https://www.patreon.com/posts/74267457||Universal AI Character Generator
|https://github.com/Zyin055/Keep-this-prompt-for-later||Keep this prompt for later Letting you save the prompt, negative prompt, and seed for selected images you generate so that you can generate them again with higher quality settings.|
|https://github.com/Vetchems/sd-lexikrea||Lexikrea Pulls prompts from Krea.ai and Lexica.art .|
|https://github.com/Zyin055/Config-Presets||Save your prompts as presets and templates.|
|https://github.com/antis0007/sd-webui-gelbooru-prompt||Gelbooru Prompt Lets you automatically pull the tags for any saved gelbooru image.|
|https://github.com/bbc-mc/uiTweaks_txt2img_prompt-controll-btn||uiTweaks txt2img prompt controll btn|
|https://github.com/opparco/stable-diffusion-webui-two-shot||Latent Couple extension (two shot diffusion port) This lets you do regional prompting like in ComfyUI. ComfyUI examples here. This is a tool to help create prompts for it.|
|https://github.com/toriato/stable-diffusion-webui-daam||DAAM This lets you see how your prompts influenced the image. The online tool to visualize and design positional prompting is useful with this extension, also there.|
|https://github.com/opparco/stable-diffusion-webui-composable-lora||Composable LoRA This extension replaces the built-in LoRA forward procedure. Works well with Latent Couple extension (two shot diffusion port).|
|https://github.com/Zuntan03/LatentCoupleHelper||LatentCoupleHelper Visual help for Extension Latent Couple extension (two shot diffusion port)|
|https://github.com/ashen-sensored/stable-diffusion-webui-two-shot/tree/feature/mask_selection||Latent Couple (two shot) with Mask Selection
Latent Couple's Mask Selection:
|https://rentry.org/xwfp2||List of styles|
|https://rentry.org/z8kzw||List of actors|
|https://rentry.org/hairstylewildcard Source||List of hairstyles|
|https://rentry.org/colorwildcard Source||List of colors|
|https://rentry.org/expwildcard Mirror Source||List of expressions|
|https://rentry.org/6uy2w||List of postures|
|https://rentry.org/k5auh||List of sexual acts|
|https://rentry.org/lots-of-emojis||List of emojis|
|https://rentry.org/qcv3e||List of emotions|
|https://rentry.org/synkf||List of character styles|
I will probably change this section to something more precise. For now, I just want those extensions out of the way.
Section for similar, but external tools here.
|https://github.com/bbc-mc/sdweb-eagle-pnginfo||The one that sends your image to a Japanese site to analyze it.|
|https://github.com/yfszzx/stable-diffusion-webui-images-browser||The basic one.|
|https://github.com/Vetchems/sd-civitai-browser||Civitai Browser You see the images for available models, and then you can download them directly.|
|https://github.com/bbc-mc/sdweb-eagle-transfer||Eagle transfer Works with the external app Eagle.|
|https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients||Aesthetic Gradients Create and use them. An alternative to Textual Inversion if you just want to add a "style" to generation. Very strong, but not that popular.|
|https://github.com/kohya-ss/sd-webui-additional-networks||Additional Networks This lets you use multiple LoRA models in generations.|
|https://github.com/d8ahazard/sd_dreambooth_extension||Training models. Not only dreambooth.|
|https://github.com/7eu7d7/DreamArtist-sd-webui-extension||DreamArtist Extended version of Textual Inversion.|
|https://github.com/aria1th/Hypernetwork-MonkeyPatch-Extension||Hypernetwork-MonkeyPatch-Extension Extension to train Hypernetworks.|
|https://github.com/antis0007/sd-webui-multiple-hypernetworks||Multiple Hypernetworks Extension Run multiple hypernetworks while generating.|
|https://git.mmaker.moe/mmaker/sd-webui-addnet-api||Additional Networks API Interfaces with kohya's Additional Networks to add an API layer.|
|https://github.com/KohakuBlueleaf/a1111-sd-webui-locon||LoCon extension for WebUI|
External Dataset tools and guides here.
|https://github.com/Maurdekye/training-picker||Lets you edit videos into a dataset|
|https://github.com/toshiaki1729/stable-diffusion-webui-dataset-tag-editor||Dataset tag editor|
|https://github.com/d8ahazard/sd_smartprocess||Smart Pre-Processing Extension It utilizes a combination of BLIP/CLIP and YOLOv5 to provide "smart cropping" for images. And more.|
|https://github.com/thygate/stable-diffusion-webui-depthmap-script||High Resolution Depth Maps for Stable Diffusion WebUI|
|This extension is no longer required.|
|https://github.com/dustysys/ddetailer||Face and Character detection mask|
|https://github.com/Coyote-A/ultimate-upscale-for-automatic1111||Ultimate SD Upscale extension|
|https://github.com/kabachuha/inpainting-pre-generation||Inpainting pre-generation This script is used to firstly generate an image with a separate prompt (i.e. a background image) and then inpaint it with a regular pipeline.|
|https://github.com/Symbiomatrix/regional-outpainting-webui-extension||Regional outpainting for automatic1111's webui, based on gradio.|
|https://github.com/s9roll7/img2img_for_all_method||img2img for all method|
|https://github.com/Kahsolt/stable-diffusion-webui-hires-fix-progressive||Hires Fix Progressive The HighresFix pipeline gives us an inspiring way to sketch and refine an image; this takes it even further|
|https://github.com/Mikulas/stable-diffusion-webui-mtg-card-art||MtG Card Art Extension|
|https://github.com/KutsuyaYuki/ABG_extension||ABG_extension (Anime Remove Background) .|
|https://github.com/klimaleksus/stable-diffusion-webui-anti-burn||Anti-burn It skips the last steps of generating the image. Unlike CLIP skip, it only skips generation steps. Very useful if your model "over-emphasizes" some of the prompts.|
|https://github.com/Mikubill/sd-webui-controlnet||ControlNet for WebUI Compatible
|https://git.mmaker.moe/mmaker/sd-webui-vae-blessup||VAE BlessUp extension Manipulate contrast and brightness of VAE|
ControlNet related extensions
|https://github.com/fkunn1326/openpose-editor||Openpose Editor Pose-making extension for ControlNet; pose by playing with the doll.|
|https://github.com/jexom/sd-webui-depth-lib||Depth map library and poser Depth-map quick-use extension for ControlNet. Used to fix hands and feet.|
|https://zhuyu1997.github.io/open-pose-editor/||open-pose-editor Web-based tool to control the skeleton and hands.|
|https://rentry.org/dummycontrolnet||Dummy ControlNet guide|
|https://rentry.org/poseblender||Posing in Blender for ControlNet|
|https://www.reddit.com/r/StableDiffusion/comments/119o71b/a1111_controlnet_extension_explained_like_youre_5/||A1111 ControlNet extension - explained like you're 5|
|https://www.reddit.com/r/StableDiffusion/comments/11fuj3i/subprompts_to_region_space_latent_couple/||Subprompts to region space - Latent Couple extension for Automatic1111 WebUI (two shot diffusion)|
Controlnet composition trick
File about why Stable Diffusion 2.x is shit.
Over-aggressive censorship removed essential data for further fine-tuning and training.
|https://rentry.org/dummySD2||Dummy proof guide to get SD 2.x working|
|https://rentry.org/nai-speedrun https://rentry.org/sdg_FAQ||NovelAI "NAI Quick Start Guide" and "FAQ" share the same goal|
|https://rentry.org/sdmodels||Stable Diffusion Models A mix of finetunes, dreambooth and full models, but also easy to understand. Even if dated, the main models are still there.|
|https://cyberes.github.io/stable-diffusion-models/sdmodels/||BACKUP of Stable Diffusion Models (
|https://cyberes.github.io/stable-diffusion-models/||BACKUP of Base Stable Diffusion Models|
|https://rentry.org/sdhypertextbook||SD Hypertextbook Complex guide on NAI and Stable Diffusion|
|https://rentry.org/pn7we||SD RESOURCE GOLDMINE backup|
|https://rentry.org/sd-nativeisekaitoo||Stable Diffusion Native Isekai Too|
|https://rentry.org/sdamd||Stable Diffusion AMD guide|
|https://rentry.org/ayymd-stable-diffustion-v1_4-guide||AyyMD Stable Diffuse v1.4 for Wangblows 10 (by anon)|
|https://rentry.org/cputard||CPU RETARD GUIDE (GUI)|
|https://stablediffusion.cdcruz.com/index.html||Stable Diffusion Guide By CDcruz Model List|
|https://gigazine.net/gsc_news/en/20221004-stable-diffusion-models-matome/||Various usable model data specialized in image generation AI 'Stable Diffusion' Summary Comparison of base models and conclusions|
|https://pastebin.com/FUiqfU5F||Small notepad of random anon.|
|https://rentry.org/UnofficialUnstableGuide||Unofficial Unstable Diffusion Beginner's Guide|
|https://rentry.org/NuiWaifuBible||Nui's Waifu Bible|
|https://rentry.org/animedoesnotexist||This Anime Does Not Exist|
|https://rentry.org/safetensorsguide||A guide on safetensors and how to convert .ckpt models to .safetensors directly with Voldy (AUTOMATIC1111)'s UI|
|https://rentry.org/waifu-diy-ai||/wAIfu/ DIY AI Resources|
|https://rentry.org/z4z8k||/hdg/ (/h/) main links It is being updated.|
|https://rentry.org/informal-training-guide||informal training guide|
|https://github.com/devilismyfriend/StableTuner||StableTuner A tool that allows multiple types of training, but not LoRA as of yet.|
|https://github.com/kohya-ss/sd-scripts||This repository contains the scripts for: - DreamBooth training, including U-Net and Text Encoder - Fine-tuning (native training), including U-Net and Text Encoder - LoRA training - Textual Inversion training - Image generation - Model conversion|
|https://github.com/bmaltais/kohya_ss||Kohya's GUI - Dreambooth - Finetune - Train Network - LoRA And can be used to merge LoRA.|
Sites with prompts and related help:
|https://rentry.org/artists_sd-v1-4||list of artists for SD v1.4 A-C / D-I / J-N / O-Z|
|https://rentry.org/54d9o||Collection of prompts and catbox links shared on /sdg/|
|https://rentry.org/NovelEmoji||NovelAI Diffusion Emoji Effects (WIP)|
|https://zele.st/NovelAI/||Illustrated examples of artist, styles and other in NovelAI Look example image below, under
|https://rentry.org/anime_and_titties||Big titty anon's list of artists|
|https://www.urania.ai/top-sd-artists||All 1,833 artists that are represented in the Stable Diffusion 1.4 Model|
|https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/3b61007a66d9f7c05fcce1a461d5907c1ce633dd/artists.csv||Just a core artists list from the AUTOMATIC1111's WebUI|
|https://docs.google.com/document/d/17VPu3U2qXthOpt2zWczFvf-AH6z37hxUbvEe1rJTsEc/edit||A Guide to Writing Prompts for Text-to-image AI|
|https://strikingloo.github.io/stable-diffusion-vs-dalle-2||Stable Diffusion: Prompt Guide and Examples|
|https://rentry.org/3y56t||Holomem Prompt List Preset prompts for Hololive characters|
|https://rentry.org/hdgpromptassist||/hdg/ Prompt Assist|
|https://danbooru.donmai.us/wiki_pages/tag_groups||Basic list of danbooru tags that are used for NAI, Waifu Diffusion, Hentai Diffusion and many lewd based models.|
|Big index, but is not working for me.|
|https://mega.nz/folder/MssgiRoT#enJklumlGk1KDEY_2o-ViA||Translation of Codex of Elements|
|https://jsfiddle.net/usw9jfmh/||Online tool to visualize and design positional prompting for ComfyUI there or Latent Couple extension there.|
|https://rentry.org/9keh5||Prompts for VTubers|
|https://rentry.org/SchizoNegative||Schizo's Negative prompts|
|https://rentry.org/g7ifo||Anon's Negative prompts|
|https://rentry.org/dd5up4||Anon's favorite prompts for LEWD|
|https://rentry.org/k5vdt7||Another Anon's favorite prompts for LEWD|
|https://rentry.org/negaprompt||Yet Another Anon's favorite prompts for LEWD|
|https://rentry.org/ocmue||You suck at prompting, here's how to fix it|
|https://rentry.org/ohs2t||Prompts for Nijisanji's characters|
|https://rentry.org/schizoneg||Supreme Schizo Negative Prompt|
|https://rentry.org/kemonofriendsNAIsfw||NAI's body horror issue|
Sites with image sharing, with metadata intact:
Infographic with prompts for camera angles:
NovelAI Tag Experiments:
|This is just a small example of Zele's NovelAI Tag Experiments. List of content: Adjective, Adverb, Animals, Anime, Art movements, Art techniques, Artists (Female Caucasian), Artists (Generic prompt), Artists (Male Latino), Danbooru - Copyright, Danbooru - Face Tags, Instruments, Interjection, Mythological Creatures, Noun, Photo Effect, Preposition, Verb... and probably more coming.
Gotti's prompts tests:
|NovelAI clothes||NovelAI hair color||NovelAI hair types|
|Source: twitter / pixiv||Source: pixiv||Source: pixiv|
|https://note.com/kohya_ss||Kohya's notebook on SD and AI|
|https://docs.qq.com/doc/DWFdSTHJtQWRzYk9k||Super in depth virtual book, about baking virtual art.|
Small embed files in formats like
.safetensors. Very flexible among the multiple core models. They work as "lenses" to find the content you want.
|https://rentry.org/embeddings||list of Textual Inversion embeddings for SD|
|https://mega.nz/folder/23oAxTLD#vNH9tPQkiP1KCp72d2qINQ||Anon's Unofficial Thread/Embed Archive|
|https://cyberes.github.io/stable-diffusion-textual-inversion-models/||BACKUP of Stable Diffusion Textual Inversion Embeddings|
|https://mega.nz/folder/ZPkFhbpS#kioBVYEuUsGbJMv1qm883Q||It also has LoRA It is doubled in LoRA's links|
|One Arm Covering Breasts Two Hands Covering Breasts Arms Crossed Covering Breasts X-shaped Nipple Pasties Ball Gag Oral Runny Makeup/Mascara Side-view Deepthroat Paizuri Spitroast Threesome Bound Wrists Missionary Anal Missionary Anal Cowgirl Tentacle Arm Grab Doggystyle||Those are very good sexual embeds. All made by Corneo|
- Bo back up
!info Those are unique embeddings, as they are used in negative prompts to fix models. Essential!
|https://huggingface.co/datasets/Nerfgun3/bad_prompt||Bad Prompts - faces, hands|
|https://huggingface.co/NiXXerHATTER59/bad-artist clone||Bad Artist the previous link now as
|https://huggingface.co/datasets/gsdf/EasyNegative||Brings out detail and colors|
|https://huggingface.co/Xynon/models/tree/main/experimentals/TI||bad-image 9600 is suggested by anon.|
|https://rentry.org/textard||RETARD'S GUIDE TO TEXTUAL INVERSION|
|https://docs.google.com/document/d/1JvlM0phnok4pghVBAMsMq_-Z18_ip_GXvHYE0mITdFE/edit#heading=h.7jsx5wqa4gc0||Textual Inversion - Training embeddings for Stable Diffusion 2.0+ in Automatic1111 UI|
|https://rentry.org/sdstyleembed||Training a Style Embedding in Stable Diffusion with Textual Inversion|
|https://rentry.org/sd-e621-textual-inversion||Textual Inversion/Hypernetwork Guide w/ E621 Content|
|https://rentry.org/sdtextualinversion||Anon's SD textual inversion guide|
|https://rentry.org/simplified-embed-training||Randanon's Oversimplified Embed Training for Characters (Dec 30 2022)|
Those are basically add-ons for the model, trained on specific models. They add their finetune information to the generated image. They can be used on different models than the ones they were trained on, but good results are not guaranteed. Only in format of
The furry_3.pt model leaked with NovelAI is by far the best. Somehow it enhances backgrounds to be more fantastical, while also shaping anatomy in the correct way. Not just furry.
|https://mega.nz/folder/TZ5jXYrb#-NXJo8wlmanr8ebbJ5GBBQ||Hataraki Ari hypernetwork Modules: 768, 320, 640, 1280 Hypernetwork layer structure: 1, 2, 1 Activation function: swish + dropout Layer weights initialization / normalization: none 115 images, size 512x512, manually selected from patreon gallery on sadpanda Watermarks + text manually removed or cropped out Deepbooru used for captions Hypernetwork learning rate: 5e-6:12000, 5e-7:30000, 2.5e-7:50000, 1e-7:100000|
|https://mega.nz/folder/sSACBAgC#kNiPVzRwnuzs8JClovS1Tw/folder/dSQiRZQT||Korean backup of the NovelAI learning data from site:
|https://rentry.org/hypernetworks||/aids/ Hypernetwork Collection|
|https://rentry.org/naihypernetworks||Stable Diffusion Hypernetworks|
|https://rentry.org/hypernetwork4dumdums||Hypernetwork training for dummies|
|https://rentry.org/HNSpeedrun||Training based on extension
|https://civitai.com/models/4086/luisap-tutorial-hypernetwork-monkeypatch-method||[LuisaP] Tutorial Hypernetwork - Monkeypatch method|
Dreambooth is finetuning a model. It takes what is already in the model and adds more detail and "force" so that those types of content show up more often. It sacrifices other elements of the base model in order to create better focus. If the base model lacks the information in the first place, the finetune will be shit anyway.
|https://www.reddit.com/r/StableDiffusion/comments/114dxgl/advanced_advice_for_model_training_finetuning_and/||Advanced advice for model training / fine-tuning and captioning|
Not all of them are Dreambooth, but they are still finetuned models.
|https://rentry.org/kwai||PUPPYSTYLE POV v1.4 and MISSIONARY POV v1.4|
|https://rentry.org/gapemodel||Gaping/Large Insertion model|
|https://rentry.org/pyros-sd-model||Pyro's Blowjob Model v1.0|
|https://rentry.org/pyros-pov-model||Pyro's POV Cowgirl Model|
|https://rentry.org/belle_sd_v2_5||Belle Delphine SD Model|
|https://rentry.org/GyokaiSD||gyokai - anono imoko dreambooth|
|https://rentry.org/airoticart||AIroticArt#1653 Notes Model of people lying down or "sidelyers"|
|https://cyberes.github.io/stable-diffusion-dreambooth-library/||BACKUP of Stable Diffusion DreamBooth Models|
|https://rentry.org/Unofficial_zeipher_SD_backup||Unofficial Zeipher SD backup|
|https://archive.org/details/tism-prism-AI||Tism Prism models|
huggingface or civitai model links Quality finetuned models, not mixes. They might not be Dreambooth, but they are definitely standalone finetunes.
|https://rentry.org/simple-db-elinas||Simple Dreambooth, by Elinas|
|https://rentry.org/db_char_training||Creating a specific character via Dreambooth|
|https://rentry.org/custom_db_waifu||Lewding Your Waifu's Specific Look via Dreambooth|
|https://rentry.org/dreambooth-shitguide||The shit guide to training with optimized dreambooth stable diffusion|
|https://github.com/victorchall/EveryDream-trainer||Every Dream trainer for Stable Diffusion|
|https://github.com/TheLastBen/fast-stable-diffusion||fast-stable-diffusion Colabs, +25-50% speed increase, AUTOMATIC1111 + DreamBooth|
|https://github.com/JoePenna/Dreambooth-Stable-Diffusion||The Repo Formerly Known As "Dreambooth"|
This is like Hypernetworks, but a lot more flexible and easier to train. The main difference is that it does not just finetune existing data when generating images; it can also add additional data and new trigger words (prompts). In form of both
.safetensors. Less often even as
LoRA exploded so much that most informative pages are a combination of links to models, training guides and usage info.
Info for training and using
|To anons who share models: You can now embed the activation keywords and README into the .safetensors models you distribute, if you use additional_networks. This lets other anons find them easily by clicking the memo button next to the model name dropdown. You can also add a preview image in the same UI. To do that, click the Additional Networks tab (also in the Extensions for training and running section) and find your model in the dropdown.||Source: 4chan's archived.moe|
If your LoRA model is in a different format, you can try this:
1. Go to the Dreambooth tab.
2. "Create model" with the "source checkpoint" set to the Stable Diffusion 1.5 ckpt.
3. That model will appear on the left in the "model" dropdown.
4. Now select your LoRA model in the "Lora Model" dropdown. (If it doesn't exist, put your LoRA PT file here: Automatic1111\stable-diffusion-webui\models\lora)
5. Name the model under "Custom Model Name".
6. Press "Generate Ckpt".
7. In the top left, press the purple arrow refresh button to view your newly created checkpoint.
LoRA comes in two variants: standalone models and embed models. Embeds are more popular and commonly used, BUT standalone models are also used. Those are attached to the main model and morph it, but can be extracted to an embed.
[LoRA] MEGA.NZ
There is just a lot of personal dumps of generations on mega...
Those folders contain many types of different content. Among them might be sensitive and gruesome things. I'm not sure if I will ever catalog them. You are searching them at your own risk.
|https://mega.nz/folder/nVkzWCjD#StcLUbCqJr4dg7gP7jL3OA||This is blank...|
Most of the guides here are about training, so this section combines training guides with usage info, while the next section is for direct training settings. I don't want to rename those sections, just for consistency with the previous sections. I might move some links in between in the future.
|https://github.com/cloneofsimo/lora||LoRA (Low-rank Adaptation for Fast Text-to-Image Diffusion Fine-tuning) This is the core repo, and also a source for information and guidelines.|
|https://github.com/cloneofsimo/lora/discussions/69||Using high LR + Learning rate scheduling with captions for better preservation|
|https://rentry.org/lora_train||LoRA Training Guide Multiple rentry pages|
|https://rentry.org/LazyTrainingGuide||In some way, this is an extension of the link above. A fast list of settings that should "just work".|
|https://rentry.org/2chAI_LoRA_Dreambooth_guide_english||LoRA guide; a Russian-language version is available|
|https://web.archive.org/web/20230128215408/https://rentry.org/lora-tag-faq||lora training tagging faq Because the rentry was moved or deleted, the archive is linked instead.|
|https://rentry.org/59xed3||THE OTHER LORA TRAINING RENTRY|
|https://rentry.org/lora-training-science||lora training - crude science (WIP)|
|https://seesaawiki.jp/nai_ch/d/Lora%b3%d8%bd%ac%c0%ae%b2%cc||NovelAI 5ch Wiki Lora page of Japanese Stable Diffusion Wiki|
|https://rentry.org/HDGLoRaIssues||Common Issues with LoRA|
|https://civitai.com/models/7709/1990s-fantasy-oil-painting-art-style-lora-1mb-9fopas-style-training-guide-included||Style training guide|
|https://imgur.com/a/93AvCi9||Do fine-tuning with Low-Rank Adaptation (LoRA) Visual Guide, preview below under
|https://rentry.org/dummylora||Dummy local LoRA usage and local training setup guide (Windows, Nvidia)|
|https://rentry.org/i5ynb||[JP] LoRA Learning Memo (Lain, Yoshinaga-sensei, and others)|
|https://rentry.org/lora_logs||LoRa training log|
LoRA GUIDE 1
It was too big, so I had it split.
This section is more about precise settings and code. Sorry for being confusing; some links you are looking for are actually above, in [LoRA] Guides and info.
|https://colab.research.google.com/github/Linaqruf/kohya-trainer/blob/main/kohya-LoRA-dreambooth.ipynb||Google Colab to train|
|https://pastebin.com/3tyc5WN3||Another preset, with some explanations|
|https://rentry.org/lora-linux-troubleshooting||LoRA training on Linux|
|https://github.com/derrian-distro/LoRA_Easy_Training_Scripts||LoRA Easy Training Scripts|
|https://huggingface.co/khanon/lora-training||khanon's lora-training notes This page also contains links to LoRA models.|
|https://pastebin.com/Rnh4He2D||popup lora config example, simple|
|https://rentry.org/9vom6||derrian-distro's LoRA_Easy_Training_Scripts rentry|
|https://rentry.co/anonskohyaentrypoint||A minimal "frontend" for Kohya Useful scripts to update training scripts etc. The dark blue text is fucking hard to read, so I link directly to the light-mode version of rentry.|
|https://rentry.org/k2z2h||Example settings for kohya with lion|
LoCon is an Extended LoRA.
|https://github.com/KohakuBlueleaf/LoCon||LoCon Extended LoRA|
|https://github.com/KohakuBlueleaf/a1111-sd-webui-locon||LoCon extension for WebUI|
Models used to correct colors, saturation, hands and faces. Now mostly used just to fix the colors of Anything3-based models. They come in formats like
.vae.pt. That format makes the VAE load automatically with a selected main model, by making the first part of the filename the same as the main model's name.
Some SD models have a VAE built in, so in that case you can't train models and embeds on them.
|https://rentry.org/sdvae||Rentry explaining some aspects about VAE|
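The auto-load naming rule above can be sketched as a tiny helper. This is purely illustrative (the helper and filenames are my own examples, not part of the WebUI): the VAE file sits next to the checkpoint with the same base name plus a `.vae.pt` suffix.

```python
from pathlib import Path

def matching_vae(model_path: str) -> str:
    """Return the VAE filename the WebUI auto-loads for a checkpoint:
    same base name, same folder, with a .vae.pt suffix.
    Illustrative helper only, not actual WebUI code."""
    p = Path(model_path)
    return str(p.with_name(p.stem + ".vae.pt"))

# Example: anything-v3.ckpt pairs with anything-v3.vae.pt in the same folder.
print(matching_vae("models/Stable-diffusion/anything-v3.ckpt"))
```

Rename (or symlink) your VAE to that pattern and it runs with the model without selecting it manually.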
|https://comfyanonymous.github.io/ComfyUI_examples/area_composition/||Area Composition Examples|
|https://old.reddit.com/r/StableDiffusion/comments/10lzgze/i_figured_out_a_way_to_apply_different_prompts_to/||Precise prompt positioning. Look
|https://rentry.org/865dy||Getting Started on Paperspace for Retards <3 If you don't trust google colab for training, this helps to set up an alternative.|
|https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb||Running AUTOMATIC1111's WebUI in Google Colab.|
|https://colab.research.google.com/drive/1STL60qfoY-iSlhRb9zFETRLTqhNbznRf||Alternative of the upper one, with links and presets for specific models.|
|https://huggingface.co/spaces/stabilityai/stable-diffusion||Online Demo of
|https://huggingface.co/spaces/fffiloni/stable-diffusion-inpainting||Online Demo of
|https://huggingface.co/spaces/huggingface-projects/diffuse-the-rest||Online Demo of
|https://github.com/Linaqruf/kohya-trainer||Kohya Trainer Colab notebooks for different types of training.|
|https://huggingface.co/spaces/camenduru/webui||Clone of WebUI on huggingface. Can run privately.|
|https://huggingface.co/spaces/skytnt/anime-remove-background||Anime Remove Background Works as advertised. Has a github and it's own models.|
|https://rentry.org/colab-cpu||HOW TO RUN COLAB ON CPU|
|https://console.vast.ai/create/||VAST One of the best GPU rental services for training.|
|https://erorate.com/||Discord-based, paid art generation service. Uncensored.|
|https://midjourney.com/||Discord-based, paid art generation service. Censored.|
|https://novelai.net/||Web-based, paid art generation service. Also text generation. Uncensored.|
|https://openai.com/dall-e-2/||Web-based, paid art generation service. Censored.|
|https://waifus.nemusona.com/||Nemu's Waifu Generator|
|https://nijijourney.com/en/||"Let's make magic anime pictures!"|
|https://yodayo.com/||"AI art platform for vTubers and anime fans"|
|https://holara.ai/||"Create anime artwork with AI in seconds" They have some unique tech that they keep hidden, like NovelAI tried to do.|
|https://www.astria.ai/||Not only image generation; they also allow you to create finetunes.|
|https://dezgo.com/||Small multi model SD tool. Can use anonymously.|
|https://frosting.tv/||Frosting is a free in-browser AI generator, no GPU needed, and no NSFW filter.|
Links in this section point to resources that cover multiple types of models. Most of the time they will be duplicates of ones posted before, because they are chaotic and tend to lose focus.
|https://mega.nz/folder/sSACBAgC#kNiPVzRwnuzs8JClovS1Tw/folder/dSQiRZQT||Korean backup the NovelAI learning data from site:
|https://rentry.org/sdg-link||This is a big database of links from /sdg/. More updated than whatever I'm doing here.|
|https://rentry.org/hdgfaq||Similar like the one before. This time based on /hdg/. LEWD focused|
|https://anonfiles.com/WfD1YbUey9/Diffusion_VR_7z||Script with examples of hooking up Stable Diffusion with VR goggles, to "enhance" the environment in real time. Look
|https://mega.nz/folder/EP1mQRCQ#p2LmfEmIMBB974KNBgbMjA||Mega folder with the model, hypernetwork and VAE of Dartsysafe.|
|https://github.com/space-nuko/sd-webui-utilities||SD WebUI Utilities|
EXAMPLE 5 Video: https://files.catbox.moe/46qfr7.mp4
Source: Zeipher's Discord (Server REMOVED by the author)
|https://rentry.org/safeunpickle2||SAFEUNPICKLE v2 STRIPS
|https://rentry.org/25i6yn||Prebuilt xformers files for based retards|
|https://github.com/mmaitre314/picklescan||Python Pickle Malware Scanner|
|https://huggingface.co/docs/hub/security-pickle||Document about Pickle Scanning|
|https://github.com/huggingface/safetensors||This repository implements a new simple format for storing tensors safely|
|https://github.com/zxix/stable-diffusion-pickle-scanner||Still very good script|
|https://ctftime.org/writeup/16723||Writeup pyshv1 by hellman / LC↯BC|
|https://github.com/diStyApps/Stable-Diffusion-Pickle-Scanner-GUI||GUI to safely scan your files. Seems like the best way to scan for pickles|
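The scanners above all rely on the same property: a pickle stream can be disassembled without being executed. Here is a minimal stdlib sketch of that idea (real tools like picklescan check far more, so use them rather than this):

```python
import io
import pickle
import pickletools

# Opcodes that import or call arbitrary objects -- the usual attack vector.
SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "INST", "OBJ", "REDUCE"}

def scan_pickle(data: bytes) -> list:
    """Return suspicious (opcode, arg) pairs found in the pickle stream,
    WITHOUT executing it: pickletools.genops only disassembles."""
    found = []
    for opcode, arg, pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in SUSPICIOUS_OPS:
            found.append((opcode.name, arg))
    return found

# A harmless pickle: just a dict of numbers.
safe = pickle.dumps({"weights": [1, 2, 3]})

# A malicious pickle: calls os.system on unpickling. Building it is safe;
# LOADING it with pickle.load would execute the command.
class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

evil = pickle.dumps(Evil())

print(scan_pickle(safe))   # prints []
print(scan_pickle(evil))   # flags the import of os.system and the REDUCE call
```

Note the sketch flags any custom class (REDUCE), so it over-reports on legitimate checkpoints; dedicated scanners use allowlists of known-safe globals instead.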
|https://rentry.org/LFTBL||STABLE DIFFUSION MIXING EMPORIUM|
|https://rentry.org/sdhassan||Welcome to Hassans Page!|
|https://rentry.org/ldorm||Lazy Dump of Random Merges|
|https://rentry.org/animusmixed||animusmusics#2151 Stable Diffusions Updates|
|https://rentry.org/hdgrecipes||/hdg/ Stable Diffusion Models Cookbook|
|https://rentry.org/8nxtk||/vtai/ Models Cookbook|
|https://rentry.org/berrymix||Berrymix Old rentry, copied many times already.|
|https://rentry.org/better-anime-hands||A simple guide to make your anime model better at DRAWING HANDS This link is also in fixing models.|
|https://github.com/lodimasq/batch-checkpoint-merger||Alternative for sigmoid merging Works inside automatic1111's webui venv. You can also use it as a standalone:
|https://github.com/eyriewow/merge-models/||standalone merging script|
|https://github.com/diStyApps/Merge-Stable-Diffusion-models-without-distortion-gui||Separate gui to merge models. Uses scripts to make the merge more coherent. But does only simple connections of two models.|
|https://github.com/ProducerMatt/Merge-Stable-Diffusion-models-without-distortion-gui||Same as above, with partial .safetensors support added. Listed separately until the change is merged or one of them is updated further.|
|https://rentry.org/gbkei||magnet for noodlemix|
|https://rentry.org/lewdsdmodels||Lewd Stable Diffusion models|
|https://gist.github.com/xrpgame/8f756f99b00b02697edcd5eec5202c59||Script that converts models to safetensors|
|https://gist.github.com/RassilonSleeps/4c9a05d76714c87c858a08685cfd6fb3||2nd script that converts models to safetensors, for when the other one can't|
|https://huggingface.co/spaces/diffusers/convert-sd-ckpt||Convert Stable Diffusion
|https://github.com/ratwithacompiler/diffusers_stablediff_conversion||unet to ckpt converting script There is a problem where the inner directory gets the same name as the main ckpt file. The easy fix is to make the script save the new file as
|https://gist.github.com/jachiam/8a5c0b607e38fcc585168b90c686eb05||other unet to ckpt converting script|
|https://github.com/jsksxs360/bin2ckpt||bin to ckpt converting script|
|https://github.com/diStyApps/Safe-and-Stable-Ckpt2Safetensors-Conversion-Tool-GUI||Compact GUI to convert models. Seems like the easiest way at this time.|
|https://www.reddit.com/r/StableDiffusion/comments/zyi24j/how_to_turn_any_model_into_an_inpainting_model/||How to turn any model into an inpainting model|
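The converter scripts above mostly do the same thing: unpickle the checkpoint, pull out the tensor state dict, and re-save it with safetensors. A rough sketch of that flow, assuming torch and safetensors are installed (the `state_dict` key is the common ckpt layout, not guaranteed). Remember the rule above: converting does not scan, and `torch.load` still unpickles the .ckpt, so scan the file first.

```python
def safetensors_name(ckpt_path: str) -> str:
    """Output filename for a converted checkpoint (illustrative helper)."""
    base = ckpt_path.rsplit(".", 1)[0]
    return base + ".safetensors"

def convert(ckpt_path: str) -> str:
    """Sketch of what the converter scripts above do. Assumes the
    safetensors and torch packages are installed, and that the file
    was ALREADY scanned for pickles."""
    import torch
    from safetensors.torch import save_file

    ckpt = torch.load(ckpt_path, map_location="cpu")
    state = ckpt.get("state_dict", ckpt)  # many ckpts nest the weights here
    # Keep tensors only; save_file wants contiguous tensors.
    state = {k: v.contiguous() for k, v in state.items() if hasattr(v, "shape")}
    out = safetensors_name(ckpt_path)
    save_file(state, out)
    return out

print(safetensors_name("models/anything-v3.ckpt"))  # models/anything-v3.safetensors
```

For anything serious, prefer the maintained converters linked above; this only shows why the conversion itself cannot detect pickles.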
With dynamic bucketing being popular, cropping is not that important anymore. But if you use online servers to train your models, you might want to crop anyway, to reduce file sizes.
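For context, aspect-ratio bucketing resizes each image to a nearby training resolution that roughly keeps its aspect ratio, instead of cropping everything to a square. A sketch of the bucket math, assuming a 512x512 area budget and dimensions in multiples of 64; real trainers (NovelAI-style, kohya) add min/max side limits and other details:

```python
def bucket_resolution(width: int, height: int,
                      max_area: int = 512 * 512, step: int = 64) -> tuple:
    """Pick a training resolution with roughly the image's aspect ratio:
    both dimensions are multiples of `step` and the area stays <= max_area.
    Sketch of the idea only, not any specific trainer's exact algorithm."""
    aspect = width / height
    # Largest width/height whose product stays under the area budget.
    w = int((max_area * aspect) ** 0.5) // step * step
    h = int((max_area / aspect) ** 0.5) // step * step
    return max(w, step), max(h, step)

print(bucket_resolution(512, 512))    # (512, 512)
print(bucket_resolution(1920, 1080))  # (640, 384)
```

So a 1920x1080 image lands in a 640x384 bucket: same pixel budget as 512x512, no cropping needed.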
For AUTOMATIC1111's internal dataset extension, just look a bit lower.
|https://github.com/arenatemp/sd-tagging-helper||Helper GUI for manual tagging/cropping Super good tool to help with creation of dataset|
|https://github.com/cyber-meow/anime_screenshot_pipeline||Anime Screenshot Dataset Pipeline|
|https://www.isimonbrown.co.uk/vlc-export-frames/||How to extract the frames from a video using VLC|
|https://rentry.org/24w8d||TaggerOnnx Z3D-E621-Convnext Based on deepdanbooru tagging model, or something.|
|https://github.com/Bionus/imgbrd-grabber||Grabber Imageboard/booru downloader which can download thousands of images from multiple boorus very easily.||How to also download tags: Source: imgur|
|https://mega.nz/file/d0gGiSRK#gZeH5VFRRhMwOlMCDh6eYVeFXM96UEXaRht3Q5hwJGo||pyra mythra dataset example|
Section for similar, but internal in AUTOMATIC1111's WebUI here.
|https://github.com/RupertAvery/DiffusionToolkit||Diffusion Toolkit This is a standalone image browser that shows prompts for you, so you don't need to run the WebUI every time.|
|https://github.com/demibit/stable-toolkit||stable-toolkit (2.2.1-luna) Local image browsing, with searches for model used.|
|https://sd.mcmonkey.org/dynthresh/||To compare settings of generations.|
|https://rom1504.github.io/clip-retrieval/||To compare clip of generations.|
|https://rentry.org/FFTactics||Akihiko Yoshida styles for Stable Diffusion Guide focused on the specific style presented in games like Final Fantasy Tactics, Tactics Ogre, Bravely Default. Contains download links for LoRA, hypernetworks and maybe other models.|
Related section 1.3.7. Fixing models.
|https://rentry.org/clipfix||Skip/Reset CLIP position_ids FIX FIXING bad merges and models|
|https://rentry.org/better-anime-hands||A simple guide to make your anime model better at DRAWING HANDS|
AUTOMATIC1111's WebUI has internal extensions of the same type. Some of them could work nicely with those listed here.
|https://rentry.org/AnimAnon||FizzleDorf's Animation Guide|
|https://rentry.org/AnimAnon-Deforum||FizzleDorf's Animation Guide - Deforum Deforum is an extension for AUTOMATIC1111's WebUI.|
|https://rentry.org/sd-loopback-wave||Stable Diffusion Loopback Wave Script|
|https://github.com/Animator-Anon/Animator/blob/main/animation_v6.py||Animation Script v6.0 Inspired by Deforum Notebook|
|https://github.com/google-research/frame-interpolation||(GOOGLE) FILM: Frame Interpolation for Large Motion two images converted into smooth video|
|https://github.com/DiceOwl/StableDiffusionStuff||Loopback and Superimpose Mixes output of img2img with original input image at strength alpha|
|https://github.com/transpchan/Live3D-v2||Live3D v2.2 (AttNR) Create animations. Drawings to 3D models.|
|https://jitd.itch.io/pic2jelly||Pic2Jelly No AI, just a good tool to animate 2D pictures with physics.|
|https://github.com/showlab/Tune-A-Video||Tune-A-Video DEMO|
|https://dreamix-video-editing.github.io/ https://arxiv.org/abs/2302.01329||Just papers as of now, but interesting enough to save for the future.|
|https://github.com/TaoWangzj/Awesome-Face-Restoration||Deep Face Restoration: Denoise, Super-Resolution, Deblur and Artifact Removal|
|https://github.com/Baiyuetribe/paper2gui/blob/main/README_en.md||AI desktop application toolbox A list of AI tools|
|https://github.com/hashborgir/awesome-ai||Awesome curated list of AI based generators, free and paid.|
|https://github.com/n00mkrad/cupscale||Good easy and simple Image Upscaling GUI based on ESRGAN|
|https://github.com/chaiNNer-org/chaiNNer||Good advanced node-based GUI. You need to download upscaling models on your own, but it lets you merge results and do many other things.|
|https://github.com/JingyunLiang/SwinIR/releases/tag/v0.0||Upscaling SwinIR models. SwinIR-L_x4_GAN is recommended, but other SwinIR-L should be fine too.|
|https://upscale.wiki/wiki/Model_Database||Upscaling models database Not all are compatible with webui, but they are worth time experimenting|
|https://github.com/TencentARC/GFPGAN/tree/master/experiments/pretrained_models||alternative to GFPGANv1.4.pth needs to be renamed to
|https://mega.nz/folder/qZRBmaIY#nIG8KyWFcGNTuMX_XNbJ_g||UltraSharp model folder|
|https://huggingface.co/spaces/haoheliu/audioldm-text-to-audio-generation||AudioLDM You can make all kinds of sounds for your projects|
This model generates audio patterns in graphical form (spectrograms), which are later read back as audio.
|https://github.com/riffusion/riffusion||Official Riffusion's repo|
|https://github.com/enlyth/sd-webui-riffusion||In form of an extension for AUTOMATIC1111's WebUI|
|https://huggingface.co/riffusion/riffusion-model-v1||Official Riffusion model|
Like text to speech. For example: https://huggingface.co/models?pipeline_tag=text-to-speech&sort=downloads
|https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-custom-voice-create-voice?tabs=neural||Train your voice model Microsoft's guide on making custom voice models.|
|https://rentry.org/AI-Voice-Cloning||AI Voice Cloning for Retards and Savants|
|https://rentry.org/AIVoiceStuff||Voice AI Synthesis Guide|
|https://github.com/neonbjb/tortoise-tts||TorToiSe text-to-speech program: - multi-voice capabilities - realistic prosody and intonation - customizable/your own voices - Good guides on the main page - Colab compatible|
|https://git.ecker.tech/mrq/tortoise-tts||Clone of the upper and
|https://www.nexusmods.com/skyrimspecialedition/mods/44184||xVASynth 2 - SKVA Synth|
|https://www.nexusmods.com/skyrimspecialedition/mods/56778||xVADict community project - Elder Scrolls edition|
|https://www.nexusmods.com/skyrimspecialedition/mods/55605||.lip and .fuz plugin for xVASynth v2|
AI Dynamic Storytelling and others
Text-generating models. For example: https://huggingface.co/models?pipeline_tag=text-generation
https://platform.openai.com/playground | GPT-3 and others that predate ChatGPT
|https://aids.miraheze.org/wiki/Main_Page||Welcome to the AI Dynamic Storytelling Wiki!|
|https://rentry.org/aids-alts||/aids/ - Other Alternatives, Tools, and Links|
|https://rentry.org/aicg||miscellaneous for character.ai|
|https://rentry.org/aids-op||/aids/ — AI Dynamic Storytelling General|
|https://chat.openai.com/||ChatGPT Code, story, answers.|
|https://beta.character.ai/||Character.AI Heavily censored.|
|https://github.com/f/awesome-chatgpt-prompts||awesome-chatgpt-prompts A lot of prompts to make ChatGPT do what you want|
|https://www.reddit.com/r/ChatGPT/comments/zn2zco/dan_20/||Prompt to make it a bit less restrained. It will still censor most "violent" cases.|
|https://files.catbox.moe/hwwsv5.txt||Plain text for making ChatGPT write prompts for Stable Diffusion.|
|https://stable-diffusion-art.com/chatgpt-prompt/||ChatGPT: How to generate prompts for Stable Diffusion|
|https://rentry.org/gpt-prompt-generator-230122||"script" you will feed to ChatGPT|
|https://aids.miraheze.org/wiki/Category:NovelAI||Welcome to the AI Dynamic Storytelling Wiki! NovelAI|
|https://rentry.org/DenOfSin||Den Of Sin Inside Your Mind /aids/ lorebook cards.|
|https://rentry.org/SoloAI||ESL-Anon Guide to Solo RPG with Novelai|
|https://github.com/KoboldAI/KoboldAI-Client/||KoboldAI This is an AI used to play games. You can hook SD into it to also generate images of your adventures|
|https://github.com/TavernAI/TavernAI||TavernAI GUI popular on /aids/.|
|https://rentry.org/pygmalion-ai||Pygmalion Guide and FAQ|
|https://rentry.org/pygmalion-local||Running Pygmalion 6B locally on Linux (and on Windows)|
|https://rentry.org/pygbotprompts||Pygmalion bot prompts|
|https://rentry.org/f8peb||Full version of our post about dumping CAI logs|
|https://github.com/ebolam/KoboldAI/||fork of KoboldAI|
|https://www.reddit.com/r/KoboldAI/comments/zs8ksc/new_ui_is_released_to_united/||Reddit info 1|
|https://www.reddit.com/r/KoboldAI/comments/zrztn1/survey_results/||Reddit info 2|
|https://rentry.org/kobold8bitwindows||Anon's awful guide to kobold 8-bit on windows|
|https://github.com/facebookresearch/llama/pull/73/files||Facebook's AI leak - magnet|
|https://files.catbox.moe/o8a7xw.torrent||Facebook's AI leak - torrent|
|https://aaronsim.notion.site/Generative-AI-Database-Types-Models-Sector-URL-API-more-b5196c870594498fb1e0d979428add2d||Generative AI Database: Types, Models, Sector, URL, API & more. List of a lot of different AI model types.|
You can send suggestions to me by writing to
RentrySD-dude. Thanks to the archive, I should be able to find it.
Those are scripts for a browser.
|https://www.4chan-x.net||4chan X Browsing 4chan|
|https://gist.github.com/catboxanon/ca46eb79ce55e3216aecab49d5c7a3fb||/hdg/ catbox.moe userscript Extension that automatically embeds catbox images into the post and lets you see prompts by right-clicking the images.|
|https://rentry.org/promptcatchan||Prompt Grabber Does this:|
|https://rentry.org/promptchan||Old version of Prompt Grabber Not compatible with 4chan X|
Stable Diffusion and AI have limitless potential. That also means some content can be taboo. You are going to those servers at your own risk. DON'T BE A DICK!!! Do not go there to "act offended", cause drama or attack people!!!
Don't take this as 100%. Terms of service can change and I make mistakes.
Fast = This is subjective, based on user complaints. At least over 5 MB/s counts as true.
Time = Means there is a fixed time after which files are deleted. That does not count sites that delete files only if they are not actively downloaded.
Amount = Limit on how many times a file can be downloaded.
Pass = Can set a password for downloads.
Folder = Can place multiple files under one link.
Anon = Nothing is really anonymous. This is more about whether a login is required to upload files.
3rd = No third-party tracking. A plus for anonymity.
Embed = Whether image files can be embedded/directly linked. If not true, it might not even allow images.
FILE = Any file format. That means the size limit is mostly 200-500MB. If not true, then only images.
|https://rentry.org/sdgranking||List of top AI artists on twitter|
|https://mega.nz/file/s8ZxkapS#bp4gwBUqsDaKLU1BSG1DW8hwuREaAO4Wbp8pyZoDgOk||Video converting settings for 4chan.|
|https://www.tomshardware.com/news/stable-diffusion-gpu-benchmarks||Stable Diffusion Benchmarked: Which GPU Runs AI Fastest (Updated)|
|https://rentry.org/artists-to-do||Artists to do|
|https://rentry.org/prossh||Guides I've found useful Mostly already covered links. Plus 3 magnet links.|
|https://rentry.org/sd-mashup||=USEFUL INFO= Mostly already covered links.|
|https://rentry.org/stablediffgpubuy||GPU Buyer's Guide for Stable Diffusion|
|https://rentry.org/hdgqualitycontrol||/hdg/ Quality Control A script to queue up and run a bunch of XYZ plot templates against a list of loras/embeds/hypernets.|
|https://rentry.org/WackyPoses||I Can't Believe These Aren't J◯J◯ Poses Collection of depth masks that can be used with ControlNet, but not only.|
|https://rentry.org/remember-what-they-took-from-you||Remember What They Took From You It is about how AI text services are being censored and surveilled.|
|https://hackerfm.com/index||AI news podcast|
|https://nv-tlabs.github.io/LION/||LION: Latent Point Diffusion Models for 3D Shape Generation|
|https://github.com/lucidrains/lion-pytorch||Lion - Pytorch|
Used those scripts to save a lot of them into the
|https://files.catbox.moe/y8g20e.txt||from 18.12.2022 to 19.02.2023|
|https://files.catbox.moe/1yy1g8.txt||from 19.02.2023 to 24.02.2023|
|https://huggingface.co/JosephusCheung/ASimilarityCalculatior||ASimilarityCalculatior Useful for merging. Compares models to see what real gain a merge can provide.|
|https://vt-idiot.github.io/crispy-octo-pancake/xyz/upscalers-0-point-7/choco/index.html||Upscalers comparison for Latent and others|
1.1. Local tools - Section of things to run art generating locally.
1.1.1. AUTOMATIC1111's WebUI - Section dedicated to AUTOMATIC1111's WebUI
184.108.40.206. AUTOMATIC1111's WebUI Extensions - Extensions for the most popular GUI at time of making.
220.127.116.11.3. Extensions for pixel art -
18.104.22.168.4. Extensions for fixing models -
22.214.171.124.5. Extensions for animations -
126.96.36.199.6. Extensions for prompting -
188.8.131.52.7. Extensions for browsing -
184.108.40.206.8. Extensions for training and running -
220.127.116.11.9. Extensions for editing images -
18.104.22.168.10. Extensions for manipulating generations -
1.1.2. Stable Diffusion - Stable Diffusion section, which also includes fully trained models based on it, or separate ones that run on the same basis.
Older changelog notes were moved there: https://rentry.org/RentrySD-Changelog
- Sunday, 12 March 2023
fantasy.ai is shit run by an NFT scammer. Do not sell yourself to a shitty scare-monger like that asshole. Or lose your community and integrity, if you like to...
- Thursday, 9 March 2023
VAE BlessUp extension to 22.214.171.124.10._for_manipulating_generations
- Saturday, 4 March 2023
Latent Couple (two shot) with Mask Selection to 126.96.36.199.6._for_prompting.
- Friday, 3 March 2023
LoCon - Extended LoRA to the new (and probably temporary) 1.1.6b._LoCon.
LoCon extension for WebUI to 1.1.6b._LoCon and 188.8.131.52.8._for_training_and_running.
- Facebook/META text AI
AI news thing,
LION: Latent Point Diffusion Models for 3D Shape Generation,
Lion - Pytorch to 12. Misc.
ASimilarityCalculatior, which compares model similarities (helpful for merging), to 12.2. Additional Tools
Upscalers comparison for Latent and others to 12.2. Additional Tools
Composable LoRA - this extension replaces the built-in LoRA forward procedure; to 184.108.40.206.6._for_prompting.
LatentCoupleHelper - visual helper for the Latent Couple extension (two shot diffusion port); to 220.127.116.11.6._for_prompting.
A1111 ControlNet extension - explained like you're 5,
Subprompts to region space - Latent Couple extension for Automatic1111 WebUI (two shot diffusion) to 18.104.22.168.2._[Stable_Diffusion]_Prompting.
Advanced advice for model training / fine-tuning and captioning to 1.1.5._Dreambooth.
LoRA Block Weight merging to 22.214.171.124.2._for_model_merging.
Pirate Diffusion to 1.2.1._Online_providers.
A1111 ControlNet extension - explained like you're 5,
Subprompts to region space - Latent Couple extension for Automatic1111 WebUI (two shot diffusion),
Depth map library and poser - images and things moved from the old ControlNet section that was in 126.96.36.199.2._[Stable_Diffusion]_Prompting to the new 188.8.131.52.10.1._ControlNet_in_WebUI.
- Some fixes.
- Wednesday, 1 March 2023
- 14 LoRA mega folders to 184.108.40.206.1._[LoRA]_MEGA.NZ.
- Tuesday, 28 February 2023
- Remade my Mega links 220.127.116.11.1._[LoRA]_MEGA.NZ.
- Monday, 27 February 2023
- My 1st mega folder of re-dumping 4chan LoRA is full, so I added the 2nd one there 18.104.22.168.1._[LoRA]_MEGA.NZ.
- Sunday, 26 February 2023
TaggerOnnx Z3D-E621-Convnext to 1.3.4._Dataset_Creation.
Anon's LoRA links to 22.214.171.124._[LoRA]_models.
/hdg/ Quality Control,
I Can't Believe These Aren't J◯J◯ Poses,
Remember What They Took From You to 12. Misc.
derrian-distro's LoRA_Easy_Training_Scripts rentry,
A minimal "frontend" for Kohya,
Example settings for kohya with lion to 126.96.36.199.1._[LoRA]_Training.
HOW TO RUN COLAB ON CPU to 1.2._Online_tools.
/hdg/ (/h/) main links to 188.8.131.52._[Stable_Diffusion]_Guides.
NAI's body horror issue to 184.108.40.206.6._for_prompting.
Old version of Prompt Grabber to 8._4chan.
easyupload.io to 10._Files_sharing sites comparison.
- Friday, 24 February 2023
Just this: 12.1. Links from 4chan. More links taken from 4chan.
- Thursday, 23 February 2023
- Small changes to 9._Discords.
- Wednesday, 22 February 2023
- wildcard lists added to 220.127.116.11.6._for_prompting with lists of categories.
Prompts for VTubers,
Anon's Negative prompts,
Schizo's Negative prompts,
Anon's favorite prompts for LEWD,
Another Anon's favorite prompts for LEWD,
Yet Another Anon's favorite prompts for LEWD,
You suck at prompting, here's how to fix it,
Prompts for Nijisanji's characters,
Supreme Schizo Negative Prompt to 18.104.22.168.2._[Stable_Diffusion]_Prompting.
Den Of Sin Inside Your Mind,
ESL-Anon Guide to Solo RPG with Novelai to 6.1.2._NovelAI_tricks_and_guides.
Nui's Waifu Bible,
This Anime Does Not Exist,
A guide on safetensors and how to convert .ckpt models to .safetensors directly with Voldy (AUTOMATIC1111)'s UI,
/wAIfu/ DIY AI Resources to 22.214.171.124._[Stable_Diffusion]_Guides.
/aids/ - Other Alternatives, Tools, and Links,
miscellaneous for character.ai,
/aids/ — AI Dynamic Storytelling General to 6._AI_Text
This is for Arch Linux/Manjaro,
HatefulSable's Guide to Installing Stable Diffusion WebUI on Ubuntu With an AMD GFX8 GPU,
HatefulSable's Guide to Installing Stable Diffusion WebUI on Ubuntu With an Nvidia GPU,
SD AMD Fix,
Stable Diffusion with AMD RX580 on Gentoo (and possibly other RX4xx and RX5xx AMD cards) to 1.1._Local_tools.
Artists to do,
Guides I've found useful,
GPU Buyer's Guide for Stable Diffusion to 12. Misc.
Full version of our post about dumping CAI logs to 6.2.1._Pygmalion.
Lorebook guide to 6.1.2._NovelAI_tricks_and_guides.
Training a Style Embedding in Stable Diffusion with Textual Inversion to 126.96.36.199._[Textual_Inversion]_Guides
- Small info. Extraction of links from 4chan (mentioned before) gave almost 400 embeds (H, TI and L). That is just catbox and some singular others. Because their names are scrambled, this will take a while. It is only 31GB though, so I might drop it somewhere before cataloging is completed. Suggestions are welcome.
- Tuesday, 21 February 2023
Voice AI Synthesis Guide to 5._AI_Voice.
magnet for noodlemix,
Lewd Stable Diffusion models,
Slimy's Rentry to 188.8.131.52._Finished_Mixes.
Pygmalion Tips to 6.2.1._Pygmalion.
A simple guide to make your anime model better at DRAWING HANDS to 1.3.7._Fixing_models, now as a new section.
Openpose Editor extension to help with ControlNet generations, to 184.108.40.206.10._for_manipulating_generations.
Posing in Blender for ControlNet to 220.127.116.11.10._for_manipulating_generations under ControlNet related.
Dummy proof guide to get SD 2.x working,
Anatomy v1.0 to 1.1.2._Stable_Diffusion.
Dummy local LoRA usage and local training setup guide (Windows, Nvidia),
[JP] LoRA Learning Memo (Lain, Yoshinaga-sensei, and others),
LoRa training log to 18.104.22.168._[LoRA]_Guides.
Fumo Diffusion to 22.214.171.124._[Dreambooth]_models.
Git revert guide to 1.1.1._AUTOMATIC1111's_WebUI.
/aids/ Hypernetwork Collection to 126.96.36.199._[Hypernetworks]_models.
owl's models to 188.8.131.52._[LoRA]_models.
- 7.1.1. KoboldAI moved to 6._AI_Text and many changes there...
- There are probably bugs; more fixes and links later...
Older changelog notes were moved there: https://rentry.org/RentrySD-Changelog
I do not condone any type of model or AI creation, as long as they are NOT made at someone's expense and through abuse. To create a better AI network, all kinds of data are needed. We are getting depressed, lonely and frustrated. Sexuality became commercialized and politicized. A lot of AI advances are based on that. It is good to have a release of sexual tension, without harming anyone. Nothing is sacred anymore and we need this freedom - now more than ever. But still, it is not a substitute for professional help. Find help, and take care of yourselves, you lovely perverts. *People that willingly made themselves a publicly known product are excluded from the "expense and abuse" statement. That is a "copyright" problem now, and I give no fucks about that. "Fair use of non-commercial content, based on licensed property."
-----BEGIN PGP PUBLIC KEY BLOCK-----
-----END PGP PUBLIC KEY BLOCK-----