NoobAI guide

Back to Main Page

WIP:

Likely to change as the model is new. This guide works for both anime and furry art.
No I will not use JPGs. YOU WILL let this page load.

Final Release and guide Updates

This guide is slightly outdated now that the 0.75S and 0.9R vpred Noob models are out, plus merges like NTRmix becoming popular. I'm basically waiting for vpred 1.0 to drop and will update the guide then.

Update 01/12/2024

Swapped the main recommended model to PersonalMerge. Probably performs better for most users.

Update 18/11/2024

Another kind anon made these artist comparisons
https://rentry.org/jinkies-doomp
https://rentry.org/efxlpre-doomp2

Base Models

Noob and its merges currently come in three varieties: Epsilon, Lightning, and vpred. The Lightning models primarily come from furry merges. Vpred is still in training for the mainline model.
Epsilon Noob is a bit of a mess. Due to an overtrained text encoder (at least, that's the current community belief), the 1.0 model does not strictly perform better than 0.5 or 0.75 in every aspect. China and the anime genners blame the introduction of the furry dataset after 0.5 for bringing in 30,000 images of diaper porn. Funny.
I tested most of these at least a little but your mileage may vary.

Mainline models:
Epsilon:
Noob 1.1 (Basically updating the epsilon model with the additional dataset that vpred got. Subjectively worse.)
Noob 1.0 (Recommended)
Noob 0.75 (Middle ground. Also recommended; better than 1.0 in a lot of cases.)
Noob 0.5 (Very schizo. Not recommended)

Merges:
PersonalMerge (Recommended. The big selling point is that it doesn't have the weird noise issues. The downside is that the default look may be slightly too anime-biased for your taste, and some furry-specific concepts and/or artist tags may not perform as well. Probably the most hassle-free model.)
Jinkies (Basically a 30 step ChromaXL.)
EasyFluffXLpreReDo (The EasyFluffes are attempts at bringing natural language into the model. Kind of made irrelevant by newer noobs having native natural language capability.)

Low-Step / Fast merges:
Lightning:
ZOINKS (Basically ChromaXL Spud with a slightly nicer tone. Recommended if you want a lightning model.)
ChromaXL (Classic Lodestone-ware (compliment). Currently comes in Mango and Spud flavors, don't ask me which is better.)

Vpred:
Currently in training. Some merges have tried to merge the in-progress Noob Vpred into 1.0, but they don't seem to improve much yet.
Running Vpred in reForge:
Update: These should get auto detected and set properly on the latest reForge versions. If not, use "Advanced Model Sampling" and tick both v_prediction and Zero SNR.
Euler A CFG++ seems to work well with these models too; just switch the scheduler to 'normal', since beta is broken for this specific sampler.
0.65s (Recommended if you wanna check vpred out)

VAE:
fixFP16
Minor bugfix VAE you can use over the regular one.

Prompting NoobAi

Using the examples off of Civit is a mess. The authors either don't know how to prompt their own model, or have included submissions by people who don't know either. Take the following with a grain of salt, since a lot has been gathered from community observation.

These apply to all Noob models.

1: Quality tags:

(masterpiece, best quality, newest, 2024,),

(worst quality, normal quality,),

Newest and 2024 are lesser-known tags that can work well depending on your set of artist tags. I personally don't use normal quality in the negative.
The vpred models have added (very awa). Don't use it; it does barely anything.
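For reference, a bare-bones skeleton using just these quality tags (the subject placeholder is yours to fill in):

```
Positive: masterpiece, best quality, newest, 2024, <your subject tags>,
Negative: worst quality,
```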

2: Artist tags:

(syuro:0.8), or (artist:syuro:0.8),

Update: It was clarified that both of these syntaxes were trained and both should work.
Different models will require artist tags at different weights! Some want them as low as 0.2-0.3, some higher.
Artist tags can be wrapped in a [:0.4] to schedule them to apply later in the gen, this can help a lot with artist tags that influence the pose a ton.

[(syuro:0.8),(honovy:0.8):0.4],
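The `[...:when]` scheduling rule above can be sketched in Python (a rough approximation of A1111-style prompt editing, not the exact webUI code):

```python
def schedule_boundary(when: float, steps: int) -> int:
    # A1111-style prompt editing: a "when" of 1.0 or less is read as a
    # fraction of total steps; anything larger is an absolute step index.
    # This is a sketch of the rule, not the exact webUI implementation.
    return int(when * steps) if when <= 1.0 else int(when)

# With 30 steps, [(syuro:0.8),(honovy:0.8):0.4] starts applying the
# artist tags around step 12, after the pose/composition has settled.
print(schedule_boundary(0.4, 30))  # 12
```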

3: Sampler & CFG:
Noob currently vastly prefers Eulers, especially ancestral, due to problems with noise and graininess.
Mainline (30 step models/PersonalMerge):

Euler Ancestral CFG++ & Normal @ 30 steps @ 0.8-1.0 CFG

Lightning (Zoinks):

Euler Ancestral CFG++ & Beta @ 14-16 steps @ 0.8-1.0 CFG

The "EulerA CFG++" sampler is currently implemented in reForge but not in A1111. If you do not want to use reForge, you can use Euler A at CFG 5.

Setting the CFG lower than 1.0: open ui-config.json in your reForge root folder, Ctrl-F "CFG Scale", and change the entries to this:

```
"txt2img/CFG Scale/minimum": 0.0,
"txt2img/CFG Scale/maximum": 30.0,
"txt2img/CFG Scale/step": 0.1,
```

```
"img2img/CFG Scale/minimum": 0.0,
"img2img/CFG Scale/maximum": 30.0,
"img2img/CFG Scale/step": 0.1,
```

Other useful tags:

4koma

In negatives if you are getting unintended manga-style layouts.

solo, anthro,

In positives; almost required to avoid filling the scene with multiple characters or humans.

watermark

Probably works, but I prefer to edit watermarks out; then again, my mixes never produce them. (RIP Zackary posters)

4: Other hints:
1: There is zero natural language prompting in the base model (Update: vpred is adding some). This extends even to smaller tag associations like "green hair tips" not always working as well as on other models.
2: The base model is Illustrious, therefore loras trained on it will work. Even Pony loras work to a lesser extent. Civit has an IL filter. However, if you are training loras yourself, the current consensus is that training on Noob 1.0 works best. I have no source for this. Training on the vpred models is highly discouraged due to a mixture of needing very specific settings and the info for those being hard to find. I still see a lot of conflicting info.
3: Noob is fairly competent at representing even low-tagged concepts. Just don't be surprised if your favourite Taiwanese platformer is suspiciously missing.
4: You can grab an updated autocomplete taglist csv from here
5: You can generate bases on Noob, and then refine them in Pony if you have a setup you like. I recommend a 1x-scale pass @ 0.4 denoise, then your upscale @ 0.3 denoise. Seems to work for me. I partially do it for style, but also to get rid of the grain/blur that Noob tends to have.
6: Pony loras work better than they should.
7: Inpainting can be a pain, especially if you are used to Pony inpainting. A few notes: first, we usually use Euler A based samplers on Noob, which means we want lower denoises than we are used to. Second, the model is highly sensitive to low context, so you may need to up your padding a lot, or do the "pixel trick" where you place a mask pixel on the nose to enlarge the context towards it. Third, it sometimes just feels "buggy"; I'm still trying to find out why, but it occasionally distorts colors completely when inpainting.
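On hints 5 and 7, a useful mental model is that only a fraction of your sampling steps actually run at a given denoise (an approximation of A1111-style img2img behavior, not exact webUI code):

```python
def effective_steps(steps: int, denoise: float) -> int:
    # In A1111-style img2img/inpainting, roughly steps * denoise
    # sampling steps actually execute (approximate; the exact count
    # depends on webUI settings).
    return int(steps * denoise)

# A 30-step pass at 0.4 denoise runs about 12 real steps, which is
# why Euler A wants lower denoise values than you may be used to.
print(effective_steps(30, 0.4))  # 12
```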

Templates:

Basegen:
These are for PersonalMerge, but look essentially similar on every 30-step model. For Zoinks, just reduce the step count to 16 and change the scheduler to beta.
As always, right click and save the image, go to your webUI "PNG Info" tab, and send it to txt2img. These assume reForge is used.

(example image)

Img2img Upscales
0.4 Denoise | 1.75 scale
Using a jpg here for once to help with page load times.
(example image)

Standout details to note here: the cords at her waist, the fluff on the hand, the reflection on the pawbs.
Euler A: Standard
DPM++ 2M: Too noisy for me.
DPM++ SDE: Nice, but slower.
Using other samplers on the base gen works too, though on some Noob merges they tend to amplify the noise a little too much. PersonalMerge handles this better in general.

Example for Zoinks
(example image)

FAQ

My anatomy is fucked up bruh!

Less of an issue on Zoinks and Jinkies. Remember, Noob really likes following your prompt. If you tell it (ass up, handholding), then your waifu will become a contortionist.

Shit's grainy af dawg!

Try Euler A samplers. It's a known issue with the model in general; they did close to zero dataset pruning, so we ended up with more jpgs in the training data than Pony had. Alternatively, use any style lora; for some reason that seems to fix some of the fucked up noise scheduling.

My inpaints got bigger halos than the average blue archive cunny!

Euler A's convergence is far smoother than DPM++ 2M's, which means lower denoise values behave more strongly. Set your denoise lower. Alternatively, set the CFG slightly lower. Soft Inpainting is also a toggle that's a no-brainer for this stuff; just enable it, it only adds a second or two to the gen time. The model also needs higher context, so up the padding.

What's this CFG++ shit about?

It's basically a solution to the whole "low CFG is creative but washes out colors, high CFG adheres to the prompt but fries the image" problem.
It does something akin to CFG rescaling inside the sampler, which means you don't need a rescale extension for vpred; it also improves image quality on regular models. You are meant to use it at CFG 1.0 and lower, but that varies from sampler to sampler.
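For intuition, vanilla CFG and the rescaling idea can be sketched like this (an illustrative toy with a hypothetical `phi` mixing factor, not the actual CFG++ sampler math):

```python
from statistics import pstdev

def cfg_combine(uncond, cond, scale):
    # Vanilla classifier-free guidance: push the model's prediction
    # away from the unconditional output, toward the prompted one.
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

def rescaled_cfg(uncond, cond, scale, phi=0.7):
    # Rescaling idea: high guidance inflates the prediction's spread,
    # which shows up as fried contrast/colors. Shrink the guided
    # prediction's spread back toward the conditional one's, then
    # blend the two with a mixing factor phi.
    guided = cfg_combine(uncond, cond, scale)
    ratio = pstdev(cond) / pstdev(guided)
    return [phi * g * ratio + (1.0 - phi) * g for g in guided]
```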

Where's my realistic dogpussy.

I highly doubt there will be a competent realism model anytime soon. Noob and Pony are both averse to this sorta stuff, noob even more than Pony was. Update: I tried 'NoobReal' and it sucks ass.

Duos? Regional Prompt? Controlnets?

Duos can be raw prompted. I don't do enough "OC on OC" to have bleedover issues; if I did, I would inpaint.
I don't do regional prompting these days; too finicky.
Controlnets supposedly kinda work. The Noob team said they are training controlnet models for everything, so it's coming at least.

So what is better, Noob or Pony?

Pony intentionally made itself dumber. Noob accidentally made itself dumber. Pony worked out at the library, Noob is the tard that can rip your arms off. (The answer is Noob).

Pub: 10 Nov 2024 00:43 UTC
Edit: 17 Dec 2024 14:33 UTC