RANDOMIX Preset for AI Role-Play in SillyTavern (Chat Completion)
Derived from "[REMIX]GeminiClaude-1302"
Recommended for GLM 4.6/5.1 (in English, mostly) or DeepSeek-chat (both Russian and English)
📥 Download
⬇️ Download Preset Version: 2304 | April 2026
- For now, the preset can only be shared via Rentry; the author can't access Catbox, and can't (and doesn't want to) verify a new account with a phone number, nor use a Google account (which they don't have)
📋 Overview
Features:
- Procedural generation of the main instruction (and some other parts) on each request, for variability and non-repeating input tokens.
- "Transition" lines are removed automatically by a regex, but are still sent to the AI within the last two messages. (Their presence encourages proactivity from the AI.)
- Three optional narrators (choose one), each with its own prefill. Memoria is nostalgic and "art house"-like; Tokki is cheerful and lively with a contemporary style; Lustris is optimal for hentai-like NSFW stories.

- A "custom reasoning" feature, where the <think> tags are altered to give them a certain flavor (e.g. 'lewd_think'; the prefix is picked at random from an array of options). With some models, the altered tag may improve the reasoning and the resulting output in creative role-play.
- If the current model can't have the beginning of its reasoning pre-filled, choose 'Prefill+User' and turn on the first prompt of the pair, for a simpler type of reasoning in the prefill. If the model also can't have the latest message attributed to 'assistant' (e.g. Claude 4.7 on AWS Bedrock), turn on the second half as well (it's empty, but technically contains a 'special space' and counts as a user message).
- No "infoblock" and no "CYOA" by default - only the "pure" RP experience (as it should be when the LLM is smart enough to work without crutches); the respective modules can still be enabled from the inactive prompts.
- The prompt is attributed to "user" entirely (no "system" messages), same as in the [REMIX] preset; this is supposed to prevent the safety filter on the Google API and may be unnecessary for other models.
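The two randomized features above (the procedurally generated instruction and the flavored <think> tag) boil down to picking elements from arrays on every request. A minimal Python sketch of the idea, with hypothetical pools (the preset's actual wording and option lists differ):

```python
import random

# Hypothetical option pools; the preset's real arrays are larger
# and phrased differently.
OPENINGS = [
    "You are the narrator of an ongoing story.",
    "Continue the collaborative fiction below.",
    "Act as the unseen storyteller guiding this scene.",
]
THINK_PREFIXES = ["think", "lewd_think", "deep_think"]

def build_instruction():
    """Assemble a main instruction and a flavored think tag per request."""
    instruction = random.choice(OPENINGS)
    think_tag = "<{}>".format(random.choice(THINK_PREFIXES))
    return instruction, think_tag

instruction, think_tag = build_instruction()
print(instruction)
print(think_tag)
```

Because a fresh combination is drawn on every swipe, the input tokens differ between requests, which is what makes cached, repetitive responses less likely.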
✨ Why try RANDOMIX?
- The prompts randomized on each swipe provide a fresh experience: loops are less likely, swipes are more varied, etc.
- The "Narrator" feature follows the common prompt-engineering technique of assigning a specific role (identity) to the AI rather than just telling it "what to do" (comparisons on various benchmarks have shown higher scores when an LLM was prompted with a "professional's role").
- The prefill, especially with a Narrator, may enrich the style and tone of responses. Depending on the model, the presence of a Narrator may also affect its reasoning.
- Some features were taken from Geechan's and EveningTruth's presets for GLM 4.6, after testing by the author with various open-source China-made models. [The preset wasn't tested with any version of Claude, and isn't guaranteed to work with Gemini 3.0/3.1 (which often gives refusals and stops mid-stream).]
- The {{char}} macro is omitted, so the preset is suitable for any kind of RP (text simulation), whether with a single pre-made character or with something else. The instructions emphasize proactivity, developing the story plot whether the User participates actively or not.
❓ Troubleshooting
Q: This preset makes me a cuck in my own RP chat!
A: Check the Preset Manager. Or leave it unchecked and use it as-is, if you're... well, you get the idea.
Q: Responses are too short/long
A: 1) Add a "Length" instruction from the Prompt Modules in the dropdown menu above the Manager; 2) Regulate the length by trimming the responses manually (the best solution, actually); 3) Involve stop tokens. Simply reducing the "output length" works too, but leave enough tokens for the model's reasoning. Response length may also depend on the current card (its definitions), not just the preset.
Q: The reasoning is staying inside the message!
A: Turn on the regex for 'custom <think>'; it will match everything from the visible beginning of a message to the closing tag and hide it automatically. Keep in mind that some models don't support prefill (for instance, GLM can't "see" a prefilled assistant message on the official API).
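The regex shipped with the preset is a SillyTavern regex script, so its exact pattern may differ; a minimal Python sketch of the same matching idea (everything from the start of the message up to and including the closing flavored tag is stripped):

```python
import re

# Example message with a custom think tag; the preset picks the
# prefix at random, so the pattern must accept any *think variant.
raw = "<lewd_think>\nplanning the scene...\n</lewd_think>\nShe steps forward."

# Lazily match from the start of the message to the matching closing tag.
pattern = re.compile(r"^\s*<(\w*think)>[\s\S]*?</\1>\s*")
visible = pattern.sub("", raw, count=1)
print(visible)  # -> "She steps forward."
```

The backreference `\1` ensures the closing tag matches whichever randomized prefix opened the block, and the lazy `[\s\S]*?` stops at the first closing tag rather than swallowing the whole message.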
Q: Does this preset works with NoAss?
A: No idea; the author doesn't know what NoAss is or what makes it different, and prefers local (open-source) AI models.
Q: Is Text Completion version of this preset planned?
A: Nope; it's considered a deprecated format, and online-hosted models don't usually support it for API requests (the old GPT-3.5 is the only exception).