NoobAI guide


WIP:

Likely to change as the model is new. This guide works for both anime and furry art.
No I will not use JPGs. YOU WILL let this page load.

Other Resources:

Noob User Manual
Contains a lot of verified and useful data all around.

[Artist comparisons]
Jinkies
EFXL
These are applicable to most noob models.

Base Models

Noob and its merges currently come in three varieties: Epsilon, Lightning, and vpred. The lightning models primarily come from furry merges.
Contrary to what you might assume, a newer version of a model does not automatically mean it's better. These models were trained with haphazard settings and a healthy dose of YOLO.
Currently I primarily recommend merges.
I tested most of these at least a little, but your mileage may vary a lot.

Epsilon
Epsilon prediction is the default. This is the standard that most finetunes are trained for, such as PonyXL. These are robust and reliable.

Vpred
Vpred models are primarily known for their deeper color ranges, but they also adhere to your prompt more strongly. This can be good or bad; the common consensus is that Vpred models are better, but IMO reality does not always reflect that.
In particular, Noob Vpred had a really rough training arc: just like the Eps models, each release fixes one problem but introduces another.
That being said Noob Vpred 1.0 is very good and likely the best model IF you wrangle it through a lora, or when a decent merge arrives.

Lightning
Lightning models have the upside of generating at a much lower step count, with the downside of introducing artifacts of varying degrees. They mostly appeal to two kinds of people: those on weaker hardware, and those who want to 'sketch' candidates for img2img/inpainting on other merges. Inpainting on these models themselves is also very broken and fairly inconsistent.

Note: Just because models seem to be anime focused does not mean they magically lose their Furry knowledge.

Mainline models:
Official EPS models:
Noob 1.1 (Basically updated 1.0 with the additional dataset that vpred got. Mostly the same as 1.0.)
Noob 1.0
Noob 0.75 (Middleground. Also recommended. Better than 1.0 in a lot of cases.)
Noob 0.5 (Very schizo. Not recommended unless you want to generate really funky i2i candidates.)

EPS Merges:
PersonalMerge (The big selling point is that it doesn't have the weird noise issues of the mainline eps models. Furry specific concepts are maintained well. The default style steers more into flat and clean shading, but it's definitely malleable. Probably the most hassle free model. (Recommended for beginners))
NTRmix (Haven't tested this yet but heard good things. Major downside seems to be the name.)
Jinkies (An attempt to reverse the lightning influence of ChromaXL, while keeping the benefits of the additional Furry training. Unfortunately this did not work well. Mostly behaves like a 30step Chroma but with the same downsides as lightning. Inpainting on Jinkies is also a mess.)
EasyFluffXLpreReDo (An attempt at recreating the legacy of SD1.5 EasyFluff for the SDXL era. Did not really succeed (yet). Carries a lot of problems from Jinkies in it.)

Vpred:
1.0 ((Recommended for people who train their own loras.) Vpred is more difficult to get a proper style mix / colors on, so if you are a beginner you may wanna try an eps merge first.)
0.9r (Looks like cosplay slop from the previews, but the added data actually improves 2D quite a bit.)
0.75s
0.65s

Low-Step / Fast merges:
Lightning:
ChromaXL (Classic Lodestone-ware (compliment). Comes in several flavors, the latest (Sorbet) is probably the one you want. (Recommended for people on toasters))
ZOINKS (Basically ChromaXL Spud with slightly different tone.)

Realism Models:
Note: Realism is not my thing. At most I use realism models to gen 2.5D, since they tend to introduce detailing. YMMV.
NoobReal2.1 (Sucks lole)
KFTRequiem (Actually kinda decent. Pretty nice for 2.5D. (Recommended if you want realism))
BIGLasagna (I eat, Jon. it's what I do. Pretty solid all-rounder.)

VAE:
fixFP16
Minor bugfix VAE you can use over the regular one.

Prompting NoobAi

Using the examples off of Civit is a mess. The authors either don't know how to prompt their own model, or have included submissions by people who don't know either. Take the following with a grain of salt, since a lot has been gathered from community observation. Most of these tips have been cross referenced with Noob User Manual, plus some personal observations.

These apply to all Noob models.

1: Quality tags:

(masterpiece, best quality, highres, absurd res, newest,),

(worst quality, normal quality,),

(newest) and (2024) are lesser known ones that can work well depending on your set of artist tags. I personally don't use normal quality in negative.
(very awa) is a quality tag that steers away from furry and more into anime and may not be desirable, I omit it.
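As a trivial illustration, the quality blocks above can be kept as reusable lists and joined per gen, so optional tags like (very awa) stay a toggle instead of getting copy-pasted around. The helper below is hypothetical; only the tag choices come from this guide.

```python
# Sketch: build the recommended quality-tag strings from lists.
# Tag choices follow the guide; the helper itself is hypothetical.

QUALITY_POS = ["masterpiece", "best quality", "highres", "absurd res", "newest"]
QUALITY_NEG = ["worst quality", "normal quality"]

def build_prompt(subject_tags, extra_quality=(), use_awa=False):
    pos = list(QUALITY_POS) + list(extra_quality)
    if use_awa:
        pos.append("very awa")  # steers toward anime, away from furry
    pos += subject_tags
    return ", ".join(pos)

print(build_prompt(["solo", "anthro"]))
# masterpiece, best quality, highres, absurd res, newest, solo, anthro
```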

2: Artist tags:

(syuro:0.8),

The noob team has contradicted themselves multiple times on what the correct syntax is. Raw seems best, no "by" or "artist:" prefixes.
Different models will require artist tags at different weights! Some want them as low as 0.2-0.3, some higher.
Artist tags can be wrapped in a [:0.4] to schedule them to apply later in the gen, this can help a lot with artist tags that influence the pose a ton.

[(syuro:0.8),(honovy:0.8):0.4],

The reverse is also true if you want the pose/shape, but not the shading.

[(dagasi:0.8)::0.4],
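For intuition, the [X:t] and [X::t] schedules above resolve per step roughly like the simplified model below. This is an illustration of the semantics only; the real A1111/reForge prompt-editing parser handles nesting, absolute step counts, and more.

```python
# Sketch of how A1111-style prompt scheduling resolves at a given step.
# [tags:0.4]  -> tags become active after 40% of steps ("in")
# [tags::0.4] -> tags are dropped after 40% of steps ("out")
# Simplified model for illustration; not the actual webUI parser.

def active_prompt(base, scheduled, mode, frac, step, total_steps):
    """Return the tags in effect at a given step."""
    progress = step / total_steps
    if mode == "in":
        extra = scheduled if progress >= frac else []
    else:  # "out"
        extra = scheduled if progress < frac else []
    return base + extra

# Early steps: the pose settles without the artist's shading influence.
print(active_prompt(["solo"], ["(syuro:0.8)"], "in", 0.4, 5, 30))   # ['solo']
print(active_prompt(["solo"], ["(syuro:0.8)"], "in", 0.4, 20, 30))  # ['solo', '(syuro:0.8)']
```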

3: Sampler & CFG:
Noob currently vastly prefers Eulers, especially ancestral, due to problems with noise and graininess.
Mainline (30 step models/PersonalMerge):

Euler Ancestral CFG++ & Simple @ 30 steps @ 0.8-1.1 CFG

Lightning (Zoinks):

Euler Ancestral CFG++ & Beta @ 14-16 steps @ 0.8-1.0 CFG

Vpred (Noob Vpred 1.0)

EulerA & Simple @ 30 steps @ 4.5-5 CFG & 0.5 CFG rescale (important).

I currently prefer regular EulerA + CFG rescale over CFG++ for Vpred.

The "EulerA CFG++" sampler is currently implemented in reForge but not A1111. If you do not want to use reForge, you can use EulerA at 5 CFG. For Vpred you would also add CFG rescaling; CFG++ already includes this in a practical sense.

Setting the CFG lower than 1.0: Open the ui-config.json in your reForge root folder. Ctrl-F and change these lines.

"txt2img/CFG Scale/minimum": 0.0,
"txt2img/CFG Scale/maximum": 30.0,
"txt2img/CFG Scale/step": 0.1,

"img2img/CFG Scale/minimum": 0.0,
"img2img/CFG Scale/maximum": 30.0,
"img2img/CFG Scale/step": 0.1,
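If you would rather script the edit than hand-patch the file, a small sketch could look like this. It assumes a stock reForge ui-config.json with exactly these slider keys; back the file up first.

```python
# Sketch: lower the txt2img/img2img CFG minimum programmatically instead of
# hand-editing ui-config.json. Key names assume a stock reForge layout.
import json
from pathlib import Path

def lower_cfg_minimum(config_path, new_min=0.0):
    cfg = json.loads(Path(config_path).read_text())
    for tab in ("txt2img", "img2img"):
        cfg[f"{tab}/CFG Scale/minimum"] = new_min
        cfg[f"{tab}/CFG Scale/step"] = 0.1
    Path(config_path).write_text(json.dumps(cfg, indent=4))

# lower_cfg_minimum("ui-config.json")  # run from your reForge root
```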

4: Short and simple prompts are more creative
The noob models are a little overtrained. This has a couple of effects; one of them is that longer and more explicit prompts have a tendency to make output less varied and more rigid. This is true for all AI models of course, but it is especially noticeable on noob. Two factors commonly play into this: explicit position prompts and long prompts. Using tags such as (rear view) or (side view) may make poses very samey. A similar effect shows up on long prompts with ~150+ tags. Sometimes I generate my main composition by staying in one token block (so under 76 tokens), then add more during inpainting and upscaling.
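If you want a quick sanity check on whether a tag prompt fits in one token block, a crude estimate is sketched below. CLIP's BPE tokenizer usually emits one or more tokens per word, so a word-plus-comma count is only a rough lower bound; trust the live token counter in your webUI over this.

```python
# Rough sketch: estimate whether a tag prompt fits one ~75-token CLIP block.
# Crude lower bound only -- real BPE tokenization can split words further.

def rough_token_estimate(prompt):
    tags = [t.strip() for t in prompt.split(",") if t.strip()]
    words = sum(len(t.split()) for t in tags)  # each word >= 1 token
    commas = max(len(tags) - 1, 0)             # each comma is its own token
    return words + commas

prompt = "masterpiece, best quality, solo, anthro, outdoors"
print(rough_token_estimate(prompt), "<= 75:", rough_token_estimate(prompt) <= 75)
# 10 <= 75: True
```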

Other useful tags:

Neg: 2koma

In negatives if you are getting unintended Manga style layouts. This also reduces the chances of multiple characters somewhat.

Pos: solo, anthro,

In positives is almost required to avoid filling the scene with multiples or humans. If you suffer from genlings (lil dudes filling your scene), then make sure solo is in there.

Neg: watermark

Probably works but I prefer to edit them out, then again my mixes never have them. (RIP Zackary mixes).

5: Other hints:
1: Later Noob models added some natural language knowledge. However, this should be used sparingly, as an auxiliary, and not as the primary way to build your prompt. You are still working with booru-tagged models here.
2: The base model is Illustrious, so loras trained on it will work. Even Pony loras work to a lesser extent, though that is literally down to luck. Civit has an IL filter. If you are training loras: from my limited testing I would recommend training on the model you intend to use; for me this is mostly PersonalMerge. If you want to use lightning models, however, train on the closest Noob base model instead. This is very important on Chroma for example: training directly on it and not on 1.1 EPS will fry your results.
Training on the closest Noob base is also recommended if you want compatibility across models.
3: Noob is fairly competent at representing even low-tagged concepts. Just don't be surprised if your favourite Taiwanese platformer is suspiciously missing.
4: You can grab an updated autocomplete taglist csv from here. This one has parity with Noob as far as I know.

Templates:

Basegen:
These are for PersonalMerge, but they look essentially the same on every 30-step model. For Zoinks you just reduce the step count to 16 and change the scheduler to beta.
As always, right click and save the image, go to your webUI "PNG info" tab, and send to t2i. These assume reForge is used.

[example image: basegen template]

Img2img Upscales
0.4 Denoise | 1.75 scale
Using a jpg here for once to help with page load times.
[example image: img2img upscale]

Standout details to note here: the cords at her waist, the fluff on the hand, the reflection on the pawbs.
Euler A: Standard
DPM++ 2M: Too noisy for me.
DPM++ SDE: Nice, but slower.
Using other samplers on the base gen works too, though on some noob merges they tend to amplify the noise a little too much. PersonalMerge handles this better in general.
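If you want to precompute the target resolution from the 1.75x scale above, the sketch below snaps the result to multiples of 8, as SDXL latent dimensions require. The helper is hypothetical; only the 1.75 scale comes from this guide.

```python
# Sketch: compute the img2img upscale target from the guide's 1.75x scale,
# snapping to multiples of 8 for SDXL latents. Helper is hypothetical.

def upscale_target(width, height, scale=1.75, multiple=8):
    def snap(v):
        return int(round(v * scale / multiple)) * multiple
    return snap(width), snap(height)

print(upscale_target(832, 1216))  # -> (1456, 2128)
```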

Example for Zoinks
[example image: Zoinks template]

FAQ

My anatomy is fucked up bruh!

Should not be an issue on the latest sets of models. This used to be an issue on early Noob epochs. If your hands look fucked up, chances are you are doing something else wrong: usually overtagging explicit digits, i.e. schizo shit like (4 digits, perfect hands, flawless fingers) and (bad hands, multiple hands, liquid fingers) in negatives, or generating at bad resolutions / bad upscale settings.

Shit's grainy af dawg!

Try EulerA samplers. It's a known issue with the model in general. Merges tend to fix this, as do most well trained loras. Earlier versions of the base models also suffer more from this.

My inpaints got bigger halos than the average blue archive cunny!

EulerA's convergence gradient is far smoother than DPM 2M's, which means lower denoise values behave more strongly. Set your denoise lower. Alternatively, also set the CFG slightly lower. Soft inpainting is also a no-brainer toggle for this stuff; just enable it, it only adds a second or two to the gen time.
Of note is that lightning models, and by extension also derivatives like Jinkies, are kinda busted for inpainting. You may want to switch to a different noob model for inpainting specifically.

What's this CFG++ shit about?

It's basically a solution to the whole "low cfg is creative but washes out colors, high cfg adheres to the prompt but fries the image" problem.
It does something akin to CFG rescaling inside of the sampler, which means you also don't need to / cannot use that extension for Vpred.
Epsilon models also benefit. You are meant to use it at 1.0 and lower cfg, but that varies from sampler to sampler, and model to model.
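The "rescaling" being described can be sketched numerically. The formula below follows the guidance-rescale idea from Lin et al., "Common Diffusion Noise Schedules and Sample Steps are Flawed": standard CFG inflates the standard deviation of the combined prediction at high scales (washed-out or fried output), and rescaling pulls it back toward the conditional prediction's std. Real implementations work per-channel on model outputs; this uses random vectors purely for illustration.

```python
# Sketch of the math behind CFG rescaling (illustrative only).
import numpy as np

def cfg_combine(cond, uncond, scale):
    # Standard classifier-free guidance combine.
    return uncond + scale * (cond - uncond)

def cfg_rescale(cond, uncond, scale, phi=0.5):
    # phi plays the role of the "CFG rescale" slider (0 = off, 1 = full).
    guided = cfg_combine(cond, uncond, scale)
    rescaled = guided * (cond.std() / guided.std())
    return phi * rescaled + (1 - phi) * guided

rng = np.random.default_rng(0)
cond, uncond = rng.normal(size=1000), rng.normal(size=1000)
print(cfg_combine(cond, uncond, 5.0).std())   # inflated vs cond.std()
print(cfg_rescale(cond, uncond, 5.0).std())   # pulled back toward cond.std()
```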

Where's my realistic dogpussy.

A few models are popping up now that seem decent at realism. Check above in the recommended models section.

Duos? Regional Prompt? Controlnets?

Duos can be raw prompted. I don't do enough "OC on OC" for bleedover to be an issue; if I did, I would inpaint.
I don't do regional prompting these days, too finicky.
Noob specific controlnet models are here

So what is better, Noob or Pony?

Noob.

Pub: 10 Nov 2024 00:43 UTC
Edit: 13 Jan 2025 22:42 UTC