NoobAI guide
Images:
No I will not use JPGs. YOU WILL let this page load.
Other Resources:
Noob User Manual
Contains a lot of verified and useful data all around.
[Artist comparisons]
StablemondAI-1
StablemondAI-2
Basic comparison with the last meta model, PonyXL.
The big upsides compared to PonyXL:
Better backgrounds in most cases.
Better comprehension of low-tagged concepts.
Smarter, especially during inpainting.
Artist tags.
Basic gist.
NoobAI is a finetune of Illustrious-xl-early-release-v0, which in turn was the best anime model for a while (large funding etc). What NoobAI did was continue training IL0, notably also adding the e621 dataset (furry). The result is a model that is more flexible than baseline IL, even more flexible than the actual 'finished' IL1.0.
As a base model, NoobAI mostly invalidated the other SDXL-based models we had.
Recommended Merges
By now there are plenty of merges, all attempting to either bake in a specific default style or improve base coherency in some way. The base NoobAI models are still very usable, however.
NoobAI models come in either Epsilon (EPS) or Vpred form. Both have their advantages and disadvantages. Opinions can differ of course, but it is generally accepted that vpred is superior "if done properly". In my personal opinion, Noob Vpred is "proper enough".
Vpred models require an updated forge/reForge to run; you may also want to enable an extension for CFG rescaling. ReForge has this built in.
Recommended:
StablemondAI Vpred (try this first)
A good all around model. Nice neutral default look, still flexible enough for artist tags. Does not sacrifice furry knowledge.
PersonalMerge EPS
Styled slightly more towards softer 2D looks than StableMond. A fairly early Noob merge, but still very popular.
3Wolf
My favourite realism model because it doesn't sacrifice much in terms of model smarts.
Honorable mentions:
Noob EPS 1.1
Noob Vpred 1.0
Both of these still have merit. Considering that what people merge into their models is 99% of the time anime stuff, the base models still retain the most furry knowledge. If you train loras, you want to train on these.
ChromaXL
Classic lodestone-ware. A lightning model, which means it runs at 8 steps instead of the usual 30, sacrificing a bit of quality. One of the only models, if not the only one, that did a bit of furry finetuning on top. Recommended for toasters. Note: inpainting on lightning sucks, so switch to another EPS-based model for that. If you train loras, do not train on Chroma; train on EPS 1.1.
Sloth-ware:
Chuck's FnS
Manticore
dragon_ball_z_budokai_tenkaichi_3
All of these are well made and fun to use. Stronger 2.5D style compared to Stablemond, and a stronger human bias. Manticore is probably my fav of these.
RealMond
Like Stablemond but with a bit more realism. Slightly higher human bias.
Not recommended:
None of these models are 'bad', because at their core they are still NoobAI. Some are misleading however, or are designed to farm Civit Buzz.
NovaFurry
The Nova models do not improve on the furry knowledge of Noob in any way; if anything they do the opposite by merging in various anime models. These are quintessentially Civit Buzz farm models: people get baited by the word 'furry' and think they somehow have more furry knowledge, which isn't true. They attempt to look like PonyXL, which is fairly popular on Civit. Use this if you like its default style, but don't assume that it's "designed for furry stuff" and somehow better at it.
Alternative: Stablemond
Zoinks
Jinkies
EasyFluffXLpreReDo
Zoinks is based on an earlier Chroma version and changed barely anything, so it's somewhat pointless and superseded by a newer Chroma version.
Jinkies, and by extension EasyfluffXL, were attempts at leveraging the extra training that ChromaXL did while removing the lightning lora. It didn't really work too well; they tend to behave like a lightning model but demand 30 steps.
Alternative: Stablemond
VAE:
fixFP16
Minor bugfix VAE you can use over the regular one.
Prompting NoobAi
The example prompts off of Civit are a mess for the base Noob models. The authors either don't know how to prompt their own model, or have included submissions by people who don't. Take the following with a grain of salt, since a lot of it has been gathered from community observation. Most of these tips have been cross-referenced with the Noob User Manual, plus some personal observations.
These apply to all Noob models.
1: Quality tags:
masterpiece, best quality, highres, absurd res, newest,
worst quality, worst aesthetic,
(newest) and (2024) are lesser-known ones that can work well depending on your set of artist tags. I recommend against using (normal quality) in the negative. There's a couple more metatags, but I haven't tested them extensively.
(very awa) is a quality tag that steers away from furry and more into anime and may not be desirable, I omit it.
The set I primarily run is: masterpiece, best quality, and worst quality in the negative.
2: Artist tags:
(syuro:0.8), or (artist:syuro:0.8)
Both syntaxes were trained. In 99% of cases use the first syntax. If you encounter bleedover from something like carrot (artist), I have more luck with the weight trick: (carrot:0) (artist),
Different models will require artist tags at different weights! Some want them as low as 0.2-0.3, some higher.
Artist tags can be wrapped in a [:0.4] to schedule them to kick in later in the gen (here, after 40% of the steps); this can help a lot with artist tags that influence the composition.
[(syuro:0.8),(honovy:0.8):0.4],
The reverse also works if you want the pose/shape but not the shading: the tag gets dropped partway through instead.
[(dagasi:0.8)::0.4],
3: Sampler & CFG:
Noob currently vastly prefers Eulers, especially ancestral, due to problems with noise and graininess. Most merges can get away with other samplers.
EPS (PersonalMerge):
Euler Ancestral CFG++ & Simple @ 30 steps @ 0.8-1.1 CFG
Lightning (Chroma):
Euler Ancestral CFG++ & Beta @ 14-16 steps @ 0.8-1.0 CFG
Vpred (StableMond):
EulerA & Simple @ 30 steps @ 4.5-5 CFG & 0.5 CFG rescale (important).
I currently use DPM++ 2M SDE & Beta @ 30 steps @ 5 CFG + 0.7 CFG rescale, but your mileage may vary. My setup uses a lora, which is usually enough to fix noise issues.
Most good merges like StableMond also fix the noise issues of base noob. If you want to do grid comparisons of Sampler/Scheduler combos, I recommend doing gradient backgrounds as the noise issues are very visible there.
Setting the CFG lower than 1.0 for CFG++: Open the ui-config.json in your reForge root folder. Ctrl-F and change these lines.
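If you can't find them: in a stock A1111-style ui-config.json the relevant entries look roughly like the lines below (key names may differ slightly between forks, and the 0.0 minimum is my suggestion rather than a reForge default). Restart the UI afterwards.
"txt2img/CFG Scale/minimum": 0.0,
"img2img/CFG Scale/minimum": 0.0,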
4: Short and simple prompts are more creative
The Noob models are a little overtrained. This results in a couple of side-effects. One of them is that longer and more explicit prompts have a tendency to make output less varied and more rigid. This is true for all AI models, but it is especially noticeable on Noob. Two factors commonly play into this: explicit position prompts and long prompts. Using tags such as (rear view) or (side view) may make poses very samey. A similar effect can be experienced on long prompts with ~150+ tags. Sometimes I generate my main composition by staying in one token block (so under 76 tokens), and then adding more during inpainting and upscaling.
Consider this a tip more so than dogma. Your mileage may vary.
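As a purely hypothetical illustration (placeholder tags, not a tested prompt), a lean composition pass might look like:
Pos: masterpiece, best quality, solo, anthro female, domestic cat, sitting, window, warm lighting, (syuro:0.8),
Neg: worst quality,
Fur, clothing, and background detail tags can then come in during inpainting and upscaling.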
Other useful tags:
Neg: 2koma
Put this in negatives if you are getting unintended manga-style layouts. It also reduces the chances of multiple characters somewhat. Not a common issue on merges.
Pos: solo, anthro/furry female
Almost required in positives to avoid filling the scene with multiples or humans. If you suffer from goobers (miniature versions of your main character filling the scene), make sure solo is in there. If you prompt (felid, domestic cat), you often end up with goobers; try using only one species tag in that case. Or embrace the Goob.
Neg: watermark
Probably works, but I prefer to edit them out; then again, my mixes never have them. (RIP Zackary mixes.)
5: Other hints:
1: Later Noob models added some natural language knowledge. Natural language prompting remains copium; the SDXL text encoder simply cannot make it work properly. Successes are often loaded with confirmation bias.
2: The base model is Illustrious, therefore loras trained on it will work. Even Pony loras work to a lesser extent, though this is literally down to luck. Civit has an Illustrious filter to help you find models. Edit: it now also has a NoobAI filter, which is somewhat annoying because most models aren't tagged correctly.
3: Noob is fairly competent at representing even low-tagged concepts. Just don't be surprised if your favorite Taiwanese platformer is suspiciously missing.
4: You can grab an updated autocomplete taglist csv from here. This one has parity with Noob as far as I know.
Templates:
Basegen:
These are for PersonalMerge, but essentially look similar on every 30-step model. For lightning models you just reduce the step count to 16 and change the scheduler to Beta.
As always, right click and save the image, go to your webUI's "PNG Info" tab, and send to t2i. These assume reForge is used.
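In case the images get stripped wherever this page ends up mirrored, the embedded settings boil down to roughly this (the resolution is my assumption of a typical SDXL-native size; the templates carry their own):
PersonalMerge (EPS): Euler Ancestral CFG++ | Simple | 30 steps | ~1.0 CFG | 1024x1024
Lightning (Chroma): Euler Ancestral CFG++ | Beta | 16 steps | ~1.0 CFG | 1024x1024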
Img2img Upscales
0.4 Denoise | 1.75 scale
Using a jpg here for once to help with page load times.
Standout details to note here: the cords at her waist, the fluff on the hand, the reflection on the pawbs.
Euler A: Standard
DPM++ 2M: Too noisy for me.
DPM++ SDE: Nice, but slower.
Changing samplers during upscaling and inpainting avoids some of the caveats with using them on your basegen on noob. Experiment.
Example for Lightning models
FAQ
My anatomy is fucked up bruh!
Should not be an issue on the latest sets of merges. This used to be an issue on early Noob epochs. If your hands look fucked up, chances are you are doing something else wrong: usually overtagging explicit digits, i.e. schizo shit like (4 digits, perfect hands, flawless fingers) in positives and (bad hands, multiple hands, liquid fingers) in negatives, or generating at bad resolutions / bad upscale settings.
Shit's grainy af dawg!
Try EulerA samplers. It's a known issue with the model in general. Merges tend to fix this, as do most well trained loras. Earlier versions of the base models also suffer more from this.
My inpaints got bigger halos than the average blue archive cunny!
EulerA's convergence gradient is smoother than DPM 2M's, which means lower denoise values will have a stronger effect. Set your denoise lower. Alternatively, also set the CFG slightly lower. Soft inpainting is a toggle that is a no-brainer for this stuff: just enable it, it only adds a second or two to the gen time.
Of note is that lightning models, and by extension derivatives like Jinkies, are kinda busted for inpainting. You may want to switch to a different Noob model for inpainting specifically.
You may also attempt to set the resolution higher to something like 1344x1344 during inpainting. Noob models tend to stay stable and it seems to help with halos.
What's this CFG++ shit about?
It's basically a solution to the whole "low cfg is creative but washes out colors, high cfg adheres to the prompt but fries the image" problem.
It does something akin to CFG rescaling inside of the sampler, which also means you cannot combine it with the rescale extension on Vpred (I currently prefer rescale over CFG++ on vpred).
Epsilon models also benefit. You are meant to use it at 1.0 or lower CFG, but that varies from sampler to sampler and model to model.
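For the curious, the rescale half of this is easy to sketch. The following is not reForge's actual code, just the commonly cited idea behind CFG rescaling (the 0.5-0.7 rescale values from the Vpred settings above correspond to the rescale factor here):

import torch

def cfg_with_rescale(eps_uncond, eps_cond, scale=5.0, rescale=0.5):
    # Classic classifier-free guidance: push the prediction away from the
    # unconditional output, towards the conditional one. High scale means
    # strong prompt adherence but fried colors.
    eps_cfg = eps_uncond + scale * (eps_cond - eps_uncond)
    # Rescale trick: shrink the guided prediction's std back towards the
    # conditional prediction's std, then blend by the rescale factor.
    dims = list(range(1, eps_cfg.dim()))
    std_cond = eps_cond.std(dim=dims, keepdim=True)
    std_cfg = eps_cfg.std(dim=dims, keepdim=True)
    eps_rescaled = eps_cfg * (std_cond / std_cfg)
    return rescale * eps_rescaled + (1 - rescale) * eps_cfg

CFG++ instead bakes a similar correction into the sampler's own update steps, which is why the two don't stack.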
Duos? Regional Prompt? Controlnets?
Duos can be raw prompted. I don't do a lot of "OC on OC", so I don't have bleedover issues; if I did, I would inpaint.
I don't do regional prompting these days, too finicky.
Noob specific controlnet models are here
So what is better, Noob or Pony?
Noob.
Zoomerspeak jokes are so unfunny Fluffy-chan!! Grrrrrrr this needs to be boring
⣿⣿⠟⢹⣶⣶⣝⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⡟⢰⡌⠿⢿⣿⡾⢹⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⢸⣿⣤⣒⣶⣾⣳⡻⣿⣿⣿⣿⡿⢛⣯⣭⣭⣭⣽⣻⣿⣿
⣿⣿⢸⣿⣿⣿⣿⢿⡇⣶⡽⣿⠟⣡⣶⣾⣯⣭⣽⣟⡻⣿⣷⡽
⣿⣿⠸⣿⣿⣿⣿⢇⠃⣟⣷⠃⢸⠻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣇⢻⣿⣿⣯⣕⠧⢿⢿⣇⢯⣝⣒⣛⣯⣭⣛⣛⣣⣿⣿⣿
⣿⣿⣿⣌⢿⣿⣿⣿⣿⡘⣞⣿⣼⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣦⠻⠿⣿⣿⣷⠈⢞⡇⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣗⠄⢿⣿⣿⡆⡈⣽⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⡿⣻⣽⣿⣆⠹⣿⡇⠁⣿⡼⣿⣿⣿⣿⣿⣿⣿⣿⣿⡟
⠿⣛⣽⣾⣿⣿⠿⠋⠄⢻⣷⣾⣿⣧⠟⣡⣾⣿⣿⣿⣿⣿⣿⡇
⡟⢿⣿⡿⠋⠁⣀⡀⠄⠘⠊⣨⣽⠁⠰⣿⣿⣿⣿⣿⣿⣿⡍⠗
⣿⠄⠄⠄⠄⣼⣿⡗⢠⣶⣿⣿⡇⠄⠄⣿⣿⣿⣿⣿⣿⣿⣇⢠
⣝⠄⠄⢀⠄⢻⡟⠄⣿⣿⣿⣿⠃⠄⠄⢹⣿⣿⣿⣿⣿⣿⣿⢹
⣿⣿⣿⣿⣧⣄⣁⡀⠙⢿⡿⠋⠄⣸⡆⠄⠻⣿⡿⠟⢛⣩⣝⣚
⣿⣿⣿⣿⣿⣿⣿⣿⣦⣤⣤⣤⣾⣿⣿⣄⠄⠄⠄⣴⣿⣿⣿⣇
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣦⣄⡀⠛⠿⣿⣫⣾