/aids/ Settings Repository

A quarrelsome wife is as annoying as constant dripping on a rainy day. Stopping her complaints is like trying to stop the wind or trying to hold something with greased hands.

Proverbs 27:15-16

This rentry exists as a way to keep track of notable alternative generation settings. Be sure to check back from time to time to see what has changed, as everything here is likely to be amended often. As a rule, assume that whatever settings aren't included are left at default. While the settings here have been said to work more often than not, the most important thing to remember before using any of these is that Your Mileage May Vary. The current best strategy for GPT-J-6B models (and I'm willing to bet all subsequent models from here on out) is to adjust these settings on the fly. Treat these presets more as playable cards in your deck; it works against you to believe that any of these are "set and forget".

Finally, please don't feel compelled to change what's working for you because of the wide variety of settings presets here. That's how you become a rampant Linux distro hopper. Nobody likes a Linux distro hopper.

If it ain't broke, don't fix it.


"I don't know anything!"

The language models we're using to masturbate are fundamentally trained to predict the probability of the next word, given whatever is put into them (referred to as the context).

However, the model's default proclivity for selecting the most likely next token means that it will eventually end up in a never-ending loop. That's where sampling methods come into play: they try to maximize the coherency of the output while keeping it from going stale.

So now, with the help of the following resources, I'm going to try to give a rundown of the options we're working with.

Randomness

Randomness, better known as Temperature, is a sampling method in itself: it divides the logits by a set temperature value before running each of those arbitrary real numbers through a SoftMax function that converts them into probability percentages, thereby obtaining the sampling probabilities.

Lower randomness values increase the confidence the model has in its highest-probability choices, while temperature values larger than 1 decrease that confidence in relation to the lower-probability choices. In other words, the distribution of percentage values changes depending on your temperature value.
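To make that concrete, here's a minimal Python sketch of the mechanic; the three logit values are invented for illustration:

```python
import numpy as np

def apply_temperature(logits, temperature):
    """Divide logits by the temperature, then SoftMax them into probabilities."""
    scaled = np.asarray(logits, dtype=float) / temperature
    exps = np.exp(scaled - scaled.max())  # subtract the max for numerical stability
    return exps / exps.sum()

logits = [4.0, 3.0, 1.0]  # made-up raw scores for three candidate tokens

print(apply_temperature(logits, 0.5))  # ~[0.88, 0.12, 0.00] -- sharper, more confident
print(apply_temperature(logits, 1.0))  # ~[0.71, 0.26, 0.04] -- plain SoftMax
print(apply_temperature(logits, 2.0))  # ~[0.55, 0.33, 0.12] -- flatter, more adventurous
```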

So in practice, increasing temperature makes more out-there words appear in outputs, which can lead to more verbose and interesting language, but it can also result in 'semantic drift', where the AI gets caught up on these unconventional terms and goes off on its own merry way, or decides to take an unlikely route in a moment of uncertainty.

Temperature visualization

Lower temperatures make the model increasingly confident in its top choices, while temperatures greater than 1 decrease confidence. 0 temperature is equivalent to argmax/max likelihood, while infinite temperature corresponds to a uniform sampling.

Anecdotally, increasing temperature has been said to speed up the pace of a story and introduce more imaginative outputs, so in moments of monotony it may be wise to bump it up temporarily to keep things interesting. That's effectively what Randomness will be in your stories: a way to pick up the pace or introduce avant-garde language.

Max and Min Output Length

Honestly, these two are pretty clear cut, and the hard limit of 60 tokens that NAI imposes means the AI most likely won't be generating enough tokens to lose sight of what it was trying to continue. Set them to however long you want the AI's responses to be, and call it a day.

Top-K Sampling

Top-K Sampling was created because, as we all know, relying only on temperature to smooth things out really doesn't help as much as it frankly should.

The way this one works: it sorts all potential tokens by probability and removes them one by one, from least likely to most likely, stopping when only the k most likely tokens remain, k being whatever you set the option to. After the pool is shortened to those k tokens, the probability is redistributed among them. Effectively, Top-K condenses the number of possible tokens to choose from by filtering out the really unlikely, useless ones.

The Top-K in this instance is set to 6.
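Sticking with k = 6, here's a minimal sketch of the filter in Python (the eight token probabilities are invented):

```python
import numpy as np

def top_k_filter(probs, k):
    """Keep only the k most likely tokens, then redistribute probability among them."""
    probs = np.asarray(probs, dtype=float)
    cutoff = np.sort(probs)[-k]                  # probability of the kth most likely token
    kept = np.where(probs >= cutoff, probs, 0.0)
    return kept / kept.sum()

# Made-up distribution over eight candidate tokens, most likely first
probs = [0.30, 0.20, 0.15, 0.12, 0.10, 0.06, 0.04, 0.03]
print(top_k_filter(probs, 6))
# -> the two least likely tokens are zeroed out, the remaining six renormalized
```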

Top-K is okay, but its biggest downside is that it doesn't change at all for times where there are a range of equally valid choices. Whether your sentence is "I took my dog out for a ____" or "Today, I ate a _____", Top-K applies the exact same strategy.

Utilize Top-K as an ultimate cut-off point for the potential tokens the model can choose from; it's not all that good for any other purpose than that. Anything more dynamic is found in Nucleus Sampling.

Nucleus Sampling

Nucleus Sampling (also known as Top-p Sampling) tackles the issue laid out with Top-k sampling, and it does this by working with a cumulative probability.

Whatever value you set for Nucleus Sampling is the cumulative probability target it wants to reach: it adds up token probabilities, from most likely on down, to find the smallest set of tokens that exceeds your chosen probability value. That set then becomes the pool of words to choose from.

Nucleus Sampling is set to 0.92

Having set p = 0.92, Top-p sampling picks the minimum number of words that together exceed 92% of the probability mass.
In the first example, this included the 9 most likely words, whereas in the second example it only has to pick the top 3 words to exceed 92%.

This means that Nucleus Sampling accounts for times when the next token is obvious (the token set may be smaller) and times when there are many equally valid tokens (the token set may be larger).
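A minimal sketch of that adaptivity (both distributions are invented): the same p = 0.92 keeps seven of eight tokens when the field is wide open, but only the top three when the next word is obvious.

```python
import numpy as np

def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability exceeds p."""
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]            # token indices, most likely first
    cumulative = np.cumsum(probs[order])
    cut = np.searchsorted(cumulative, p) + 1   # how many tokens it takes to pass p
    kept = np.zeros_like(probs)
    kept[order[:cut]] = probs[order[:cut]]
    return kept / kept.sum()

# "Today, I ate a ____" -- many equally valid choices, so 7 of 8 tokens survive
print(top_p_filter([0.20, 0.15, 0.14, 0.13, 0.12, 0.11, 0.08, 0.07], 0.92))
# "I took my dog out for a ____" -- one obvious continuation, so only 3 survive
print(top_p_filter([0.75, 0.15, 0.05, 0.02, 0.01, 0.01, 0.005, 0.005], 0.92))
```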

Even better, this method is often used in tandem with Top-K, with Top-K acting as the ultimate cut-off and Nucleus as the surgically precise probability calculator, together producing a set of very likely next tokens. This is why NovelAI uses both; it really is best to use them together. You don't have to, but you probably should.


The Presets

To avoid redundancy, I've removed the presets that already exist in NAI proper. Be sure to check those out first before coming here for anything else!

Euterpe

Best Guess 2

by Baker-Anon

  • Randomness: 0.81
  • Output Length: ~160 Characters
  • Repetition Penalty: 3.1
  • Top-K Sampling: 0
  • Top-P Sampling: 1
  • Tail-Free Sampling: 0.843
  • Repetition Penalty Range: 512
  • Repetition Penalty Slope: 3.33

Hello frens!

Baker here. For your consideration, I'd like to present:

Best Guess 2

It's nothing special, really; just Best Guess settings applied to Euterpe.

But this time around, there are NO changes to the context settings.

Also, the sampling method has been changed to the TFS setting from Basic Coherency.

Optimal for keeping the story on track when you're balls deep in degenerate fetish scenes; results may vary when used in open-ended idea-generation situations.

Hopefully, there's at least one anon out there besides me who this works well for; this is for you <3

Turpy

by Anon

  • Randomness: 0.75
  • Output Length: ~160
  • Repetition Penalty: 4
  • Top-K Sampling: 60
  • Top-P Sampling: 0.9
  • TFS Sampling: 0.7
  • Repetition Penalty Range: 2048
  • Repetition Penalty Slope: 0.27

A slightly modified Co-writer has been my go-to.

Storyteller

by ???

  • Randomness: 0.72
  • Output Length: ~160
  • Repetition Penalty: 2.75
  • Top-P Sampling: 0.725
  • Repetition Penalty Range: 2048
  • Repetition Penalty Slope: 0.18

It's Storywriter (Sigurd's default preset), except top-p happens before temperature.

Sphinx Moth v2

by Nyks

  • Randomness: 2.5
  • Output Length: ~160 characters
  • Repetition Penalty: 2.2
  • Top-K Sampling: 300
  • Nucleus Sampling: 0.5
  • Repetition Penalty Range: 512

Experimental - uses custom order of Top-K, Nucleus and Temperature

Reborn from its sandy pit, Sphinx rises again in all of its max-randomness glory.

Sphinx Moth is now better than ever, picking out the best tokens and giving them an equal chance of being chosen. Truly harnessing the creativity of the high end of Randomness, it delivers a wide array of creative outputs while still writing prose that makes sense!

Be ready to wrangle with this beast, for it may skip a detail or two in favor of a more creative route.
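A rough sketch of why that custom order matters (numbers invented, and this is my reading of the preset rather than NAI's actual code): once Top-K and Nucleus have already culled the list, a big temperature like 2.5 flattens the survivors toward an even chance instead of letting garbage tokens back into the running.

```python
import numpy as np

def reheat_survivors(probs, temperature):
    """Apply temperature AFTER filtering: culled tokens stay at zero,
    while the survivors get pushed toward equal odds."""
    probs = np.asarray(probs, dtype=float)
    with np.errstate(divide="ignore"):
        logits = np.log(probs)                # log(0) -> -inf, which exp() sends back to 0
    exps = np.exp(logits / temperature)
    return exps / exps.sum()

# Hypothetical survivors of a Top-K 300 / Nucleus 0.5 pass, already renormalized
survivors = [0.55, 0.25, 0.12, 0.08, 0.0, 0.0]
print(reheat_survivors(survivors, 2.5))
# -> ~[0.37, 0.27, 0.20, 0.17, 0, 0]: still ranked, but much closer to a coin flip
```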

Monkey Business

by Belverk

  • Randomness: 1.2
  • Output Length: ~144 (set this anywhere you prefer)
  • Repetition Penalty: 2.8
  • Top-K Sampling: 200
  • Tail-Free Sampling: 0.97
  • Repetition Penalty Range: 2048
  • Repetition Penalty Slope: 0.18
  • TFS applied first, then randomness/temperature

Model-agnostic preset made using the token probabilities viewer, debug options and some Sage advice from OccultSage, fine-tuned for my personal preferences. Tokens with less than a 2% likelihood of appearing get mostly culled, while tokens at 94% likelihood and above get bumped to 100%. This behavior will be familiar to everyone who has used 0.992 TFS presets before, although it works better after the adjustments and the TFS overhaul.

With increased randomness applied after filtering, this should give users a consistent and natural, yet creative output experience. The rep penalty curve returns, biased towards my scaffolding method of using lorebook entries, with emphasis on attempting to preserve accurate output of colors.

Why is it named Monkey Business? I made the preset on a whim and tested it on the Monkey World Domination prompt, which proved very useful for testing token logprobs and filtering. Name aside, it's a serious preset and the evolution of my tweaks on Sage's coherent creativity. Currently I am using Sigurd, but I've done testing on early Euterpe, and TFS works the same way for both. Euterpe is more insistent on what tokens come next, so if you want more creativity and plot twists while keeping the same TFS, feel free to bump the randomness up.
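TFS (Tail-Free Sampling) keeps coming up in these presets without ever being defined in the primer above, so here's a heavily hedged sketch of the usual tail-free algorithm: it looks at where the sorted distribution stops bending (via a discrete second derivative) and chops the "tail" off past that point. The index bookkeeping below is an assumption on my part, and NAI's actual implementation may differ in the details.

```python
import numpy as np

def tail_free_filter(probs, z):
    """Drop the low-probability tail, located via the (normalized, absolute)
    second derivative of the sorted distribution, then renormalize."""
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]            # token indices, most likely first
    sorted_probs = probs[order]
    d2 = np.abs(np.diff(sorted_probs, n=2))    # discrete second derivative
    d2 = d2 / d2.sum()
    # keep tokens until the cumulative second derivative passes z
    # (+2 because a double diff is two elements shorter -- assumption)
    keep = np.searchsorted(np.cumsum(d2), z) + 2
    kept = np.zeros_like(probs)
    kept[order[:keep]] = sorted_probs[:keep]
    return kept / kept.sum()

probs = [0.40, 0.25, 0.15, 0.10, 0.05, 0.03, 0.01, 0.01]
print(tail_free_filter(probs, 0.95))   # culls the flat, least informative tail
```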

Damn Decent TFS

by chmod007

  • Randomness: 0.9
  • Output Length: ~240 Characters
  • Repetition Penalty: 4.25
  • Top-K Sampling: 25
  • Nucleus Sampling: 0.35
  • Tail-Free Sampling: 0.95
  • Repetition Penalty Range: 2048
  • Repetition Penalty Slope: 1.72

A generation configuration focused on a subjective model-specific sweet spot.

Generation settings calibrated using New Story defaults with No Module.

Order: top_k, top_p, tfs, temperature

Sigurd

The Old Familiar

Randomness: 0.8
Top-K Sampling: 50
Nucleus Sampling: 0.9
Repetition Penalty: 2
Repetition Penalty Range: 512
Repetition Penalty Slope: Disabled

Optimal Machine

by lion

A variant of Belverk's Optimal Whitepaper v2 that I like to use in many of my generator scenarios. Tends to work well for generating content based off of examples high up in context.

  • Randomness: 0.8
  • Top-K Sampling: Disabled
  • Nucleus Sampling: 0.75
  • Repetition Penalty: 3.25
  • Repetition Penalty Range: 1024
  • Repetition Penalty Slope: 6.57

Fated Outcome

by Pause

This Preset will always return the same output until something is changed in the Context, allowing a sense of permanence and fate within the world of your narrative (see the sketch after the settings list for why).

Fun cases with this Preset include "time warping" to see how a character would have reacted if you said or did something different, and testing the effects of different token associations on the flow of a Story. Additionally, lore details and names should have a significantly higher chance of being correct.

NOTE: This Preset makes the Retry button useless while active, as the same output will always be returned until something changes.

  • Randomness: 0.1
  • Max Output Length: 400
  • Repetition Penalty: 2.4
  • Top-K Sampling: 1
  • Nucleus Sampling: 0.1
  • Tail-Free Sampling: off
  • Repetition Penalty Range: off
  • Repetition Penalty Slope: off
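The determinism mostly falls out of Top-K being 1: only the single most likely token survives the filter, so the "draw" is a foregone conclusion no matter what the other knobs say. A toy sketch with made-up numbers:

```python
import numpy as np

def sample_top_k(probs, k, rng):
    """Top-K filter, then draw a token index from the survivors."""
    probs = np.asarray(probs, dtype=float)
    cutoff = np.sort(probs)[-k]
    kept = np.where(probs >= cutoff, probs, 0.0)
    return int(rng.choice(len(probs), p=kept / kept.sum()))

rng = np.random.default_rng()
probs = [0.40, 0.30, 0.20, 0.10]
print({sample_top_k(probs, 1, rng) for _ in range(100)})  # {0} -- token 0, every time
```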

Pussy Tentacles (Jeral V4)

by HydroStorm

  • Randomness: 0.5
  • Max Output Length: 160
  • Repetition Penalty: 4.075
  • Tail-Free Sampling: 0.643
  • Repetition Penalty Range: off
  • Repetition Penalty Slope: off

Damn Decent TFS [Sigurd V4]

by chmod007

A generation configuration focused on a subjective model-specific sweet spot.

Generation settings calibrated using New Story defaults with No Module.
Lorebook, token, and context settings are pristine.

  • Randomness: 0.22
  • Output Length: ~240 characters
  • Repetition Penalty: 3.1
  • Tail-Free Sampling: 0.74
  • Repetition Penalty Range: 2048
  • Repetition Penalty Slope: 2.16

Complex

by Orion

I've been getting really good results with these settings, based off of the "Complex Readability Grade" posted in Basileus' findings in #novelai-research. With good usage of Tone, Word Choice and maybe Author in the Author's Notes, as well as a decent amount of context for the AI to consider after starting a story, you can get some stunningly evocative prose while the story's progression remains pretty consistent. Of course, it will still need guidance from time to time, and it might require some slight adjustments to Randomness and TFS occasionally based on your preferences, but I think this is going to be my go-to for serious stories until somebody finds a setting preset that's even better than this.

Randomness: 0.83
Tail-Free Sampling: 0.674
Repetition Penalty: 3.5
Repetition Penalty Range: 1024
Repetition Penalty Slope: 5.13


The Scaffold

Yes, another thing yanked from the community research discord.

Remember Notes++? Remember how effective it was to be able to stagger information through context? Fortunately, NAI provides enough options for users to recreate such a feature, and as a result, a few ~~mongoloids~~ discord users have developed a general rule of thumb for proper Lorebook placement. It's best to view this as inspiration rather than gospel, and it was first conjured during Sigurd V2's reign of terror, but it's worth a look.

The scaffold was designed by OPVAM and is meant to be a guideline for the advanced settings you should set for each lore entry (and for the advanced context settings).
What the scaffold tries to achieve is to insert the lore as close to the front/bottom of the context as possible, while still allowing the story some "breathing room" in between the lore.

TravellingRobot

| Position | Insertion Order | Reserve Tokens | Type |
| --- | --- | --- | --- |
| -12 | -100 | 200 | Memory |
| -10 | -200 | 100 | Lore: Concepts |
| -10 | -300 | 100 | Lore: Places |
| -10 | -400 | 100 | Lore: Races |
| -10 | -500 | 100 | Lore: Factions |
| -8 | -600 | 200 | Lore: Author's Note (Impromptu Lorebook Entry) |
| -6 | -700 | 100 | Lore: Characters |
| -4 | -1000 | - | Lore: *** (Forced Activation) |
| -4 | -800 | 200 | Author's Note - The 'Real' One |
| 0 | 0 | 512 | Story |

Note on token limit (ght901)

Just a note for people using these settings: if you don't keep the tokens used by a Lorebook entry under its number of reserved tokens, that entry is almost guaranteed to be trimmed (there's a small chance the first one over might not be).
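To make that warning concrete, here's a toy check in Python (the token counts are hypothetical, and the real NAI trimming logic is more involved than this rule of thumb):

```python
# Hypothetical Lorebook entries: tokens actually used vs. tokens reserved
entries = {
    "Memory":        {"used": 180, "reserved": 200},
    "Lore: Places":  {"used": 140, "reserved": 100},  # over budget -> expect trimming
    "Author's Note": {"used": 90,  "reserved": 200},
}

for name, e in entries.items():
    verdict = "fits" if e["used"] <= e["reserved"] else "almost guaranteed to be trimmed"
    print(f'{name}: {e["used"]}/{e["reserved"]} tokens -> {verdict}')
```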


This rentry is designed and updated with Sigurd in mind.
