!!! danger
##->𝐓𝐢𝐩𝐬 𝐚𝐧𝐝 𝐏𝐫𝐨𝐦𝐩𝐭𝐬 𝐟𝐨𝐫 𝐓𝐮𝐫𝐛𝐨<-
->([ɢᴏ ᴛᴏ ᴍᴀɪɴ ᴘᴀɢᴇ ᴛᴏ ꜱᴇᴇ ꜱᴏᴍᴇ ᴏᴛʜᴇʀ ꜱᴛᴜꜰꜰ](https://rentry.co/HochiMamaPlace))<-
***
!!! warning Looking for botmaking tips? They are here now: https://rentry.org/OnWritingCards
!!! info nuTurbo prompts status: we've done all we could here. People say they work on GPT-4 too, give them a try.
!!! note **Latest update: 11.7.2023**
[Jump to changelog](https://rentry.org/HochiTurboTips#changelog)
[TOC3]
First of all - pls andersten that this is not a definitive guide. I'm merely sharing what works for me and what I happened to notice during my experience with Turbo, so don't take this as an ultimate rulebook. At this point in time we are all experimenting, figuring out many different approaches and getting the feel of little nuances, and this is why there's more than one right method of doing things ~~unless you're using W++.~~
Before you start, I strongly recommend reading [this](https://rentry.org/oaicards) and [this](https://rentry.org/MothsBotMakingStuff), since both of those guides are excellent primers and my methods are based on them too. Also check out [this one](https://rentry.org/TURBOSHIT).
# Generation settings
In this part of the rentry we shall take a look at temperature, top P and penalties, and try to understand how we can possibly use them for our benefit.
## Temperature
This is the most important setting for the creativity and randomness of the AI's output.
AI works as an autocomplete, that's pretty obvious. When it has to complete your input, it has a broad selection of tokens to choose from, and some are more probable than others, based on the training data. Roughly, when you write "I want to eat a...", the AI is more likely to return "banana" and less likely to return "brick", because according to the training data people mention eating bananas way more often than eating bricks. Temperature changes those probabilities: when it's high, it evens them out, giving a boost to less probable tokens and penalizing the more probable ones; when it's low, it boosts high probabilities even more and runs low ones into the ground.
The default value of temperature is 1: at this value the model uses its natural probabilities without them being affected in any way. Usually it results in a fairly good amount of randomness without going completely wild. The higher above 1 you go, the more gibberish you will get, because the less probable tokens are boosted. At 2 you won't even get coherent words, because many words consist of more than one token. Setting temperature to 0 will return the most probable token every time, since the low-probability ones are basically gone.
So, long story short: high temp=random, low temp=predictable. If you need something that's consistent rather than creative (like grammar check where you need text corrected but not rewritten), go closer to 0. If you want creativity, it's best to stay within 0.8-1.05. If you want funky schizo, crank it up higher.
Personally, the highest coherent temp value I had was 1.20, with both penalties zeroed out. Though you will most likely be properly njegged once the context fills up.
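If it helps to see the mechanics as code, here's a minimal sketch of what temperature does under the hood (the standard "divide the logits by the temperature, then softmax" trick; the logit numbers below are made up purely for illustration):
```python
import math

def apply_temperature(logits: dict, temperature: float) -> dict:
    """Divide the logits by the temperature, then softmax them back into probabilities."""
    if temperature == 0:
        # Temperature 0 collapses to greedy sampling: the top token wins every time.
        top = max(logits, key=logits.get)
        return {tok: 1.0 if tok == top else 0.0 for tok in logits}
    scaled = {tok: lg / temperature for tok, lg in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    return {tok: math.exp(v) / total for tok, v in scaled.items()}

# Made-up logits for "I want to eat a..." - banana is likely, brick is not.
toy_logits = {"banana": 2.0, "sandwich": 1.5, "brick": -3.0}
for t in (0.2, 1.0, 1.5):
    print(t, apply_temperature(toy_logits, t))
```
At 0.2 "banana" swallows almost the whole distribution, and at 1.5 "brick" starts getting a real chance - which is exactly the predictable-to-schizo slider described above.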
## Top P
This is another way of adjusting the randomness of the output. It serves the same purpose as temperature, but works in a more predictable way: it limits the pool of tokens by their cumulative probability. A value of 0 always returns only the single most probable token, a value of 0.1 considers only the most probable tokens that together make up the top 10% of probability and discards the rest, etc. A value of 1 is the default and also turns the setting off, since the whole pool of tokens becomes available. OAI documentation recommends using EITHER top P OR temp, but not both at the same time - you're guessing right, it's because temp messes with the probabilities (in a way that's only partially predictable), and top P needs them to work. Whichever one you don't use should be set to 1 to neutralize it. I personally always use temperature - it has a cute element of divine benevolence.
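As a rough sketch (not OAI's actual code, just the standard nucleus-sampling idea): keep the most probable tokens until their probabilities add up to the threshold, then throw away everything else.
```python
def top_p_filter(probs: dict, top_p: float) -> dict:
    """Keep the most probable tokens until their cumulative probability reaches
    top_p, discard the rest, and re-normalize whatever is left."""
    kept, running = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = p
        running += p
        if running >= top_p:
            break
    return {tok: p / running for tok, p in kept.items()}

toy_probs = {"banana": 0.70, "sandwich": 0.25, "brick": 0.05}
print(top_p_filter(toy_probs, 0.9))   # brick never makes the cut
print(top_p_filter(toy_probs, 1.0))   # the whole pool stays available
```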
## Frequency penalty
This setting checks how frequently words appear in your generated output (aka each individual message you receive). Every time the same token is used, it gets a penalty that lowers the chance of it being selected again. Basically, it ensures that you don't get the same words repeated too often within the same message. A value of 0 (default) disables it; positive values produce more drastic results the higher you go. OAI recommends keeping it below 1, noting that going higher can degrade the quality of the output. A negative Freq penalty results in repeating phrases, words and even single symbols, and the effect is noticeable even at very moderate negatives (I already ran into problems after setting it to -0.2).
## Presence penalty
Unlike Freq penalty, Pres will penalize any token that has appeared in the output at all (once again, each individual message you receive). So it is more aggressive towards any repeating tokens, and it also introduces new ones more actively. OAI calls it "increasing the model's likelihood to talk about new topics", but you shouldn't read that as "being more proactive and starting shit on its own" - rather as "choosing tokens that aren't related to the current topic and potentially derailing your chat". A value of 0 (default) disables it, higher values make it more aggressive, and negative values result in your bot literally outputting the same token over and over again (I set it to -2 once and got something like "Hi, I am am am am am am am"). However, unlike Freq, you need a really low negative value for any noticeable effect. I've tried to use a negative Pres penalty to combat Turbo 16k floweriness, but didn't notice any improvement. Once again, OAI recommends keeping it below 1 to avoid ruining your outputs.
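Putting both penalties side by side as code makes the difference easier to see. This is a rough sketch of the mechanism as OAI describes it (toy numbers, not real Turbo logits): Freq stacks up with every repeat of a token, Pres is a single flat hit the moment a token has appeared at all.
```python
from collections import Counter

def penalize(logits: dict, generated_tokens: list,
             freq_penalty: float, pres_penalty: float) -> dict:
    """Frequency penalty grows with every repeat of a token; presence penalty
    is applied once, as soon as the token has appeared in the output at all."""
    counts = Counter(generated_tokens)
    return {
        tok: lg
        - counts[tok] * freq_penalty                     # stacks per occurrence
        - (pres_penalty if counts[tok] > 0 else 0.0)     # flat, one-time hit
        for tok, lg in logits.items()
    }

toy_logits = {"am": 1.0, "hungry": 0.8, "brick": -2.0}
already_generated = ["I", "am", "am", "am"]
print(penalize(toy_logits, already_generated, freq_penalty=0.5, pres_penalty=0.5))
```
Flip pres_penalty negative and that second term turns into a reward for anything that already appeared - which is how you end up with "Hi, I am am am am".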
- **Penalties are cumulative.**
The longer the message you receive, the more visible the effect of the penalties becomes (see the sketch after this list).
- **Extreme Pres can make your bot forget the surroundings and smaller details much more easily.**
In my chat with presence 1, a neon sign ended up being replaced with candles. However, the messages need to be really long for it to happen; my problem occurred in a message that was about 400 tokens long.
- **Freq and Pres MIGHT cancel each other out.**
Now, this is pure speculation that needs to be tested, but I've noticed that when Freq and Pres are at the same level, chats are way less flowery than when one is high and the other is low. So far I think it happens this way: Freq tries to gradually bring up more uncommon tokens, but Pres penalizes them for appearing even once, thus canceling out the Freq effect. Needs more practical tests.
- **High Freq and Pres on their own DON'T influence randomness.**
With temperature set to 0, no matter how high you crank up your penalties, you'll still be getting the same swipes every time, just worded differently. So always make sure your temp is above 0, unless you specifically want NO variety.
- **RPG stats may be fucked by high settings.**
With cards that have to track certain stats and make them appear in each message, you might want to settle for lower settings in order to keep those stats from breaking or disappearing. A good UJB may help with keeping them intact too, of course, but playing around with the settings can still be worth it.
- **Penalties and floweriness.**
During my tests I've noticed that higher Freq with low Pres makes text "claudesque" in a sense, bringing up more fancy words and giving a poetic flavor to the chat, but does so in moderation. High Pres with low Freq does the same, but the chat degrades into word salad noticeably faster, especially at the end of longer messages, and the whole floweriness is excessive and comical. With both penalties set to 0 you get basic, normal human speech. However, it's worth noting that I kept my temp at 1 at all times.
- **New bot, new preset?**
Abso-fucking-lutely. Upping Freq can help your more "poetic" bots keep their flowery language, while lowering it will make the more down-to-earth characters speak in a more "everyday" way. It especially matters for fuckbots: the higher the Freq and/or Pres, the more prone you are to getting "shafts" instead of "cocks" and "wet entrances" instead of "pussies", since those words will repeat a lot in your sex chat, and hence will be substituted first. So if you don't want your sexo too Shakespearean, keep the penalties low.
- **Low penalties allow for higher temperature.**
With both penalties low you can venture a bit in the schizo territory temperature-wise. As I mentioned above, I was able to get a coherent output with 1.20, but it was absolutely the highest possible temp, above that it all fell apart.
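To put a rough number on the "cumulative" point from the list above, here's a tiny sketch (same per-occurrence logic as the earlier snippet, plugged with the anon's Freq 0.85 / Pres 0.35 values) of how hard one repeated token gets pushed down as a long message keeps reusing it:
```python
def total_penalty(times_already_used: int, freq_penalty: float, pres_penalty: float) -> float:
    """How far a single token's logit has been pushed down after it has already
    appeared `times_already_used` times in the current message."""
    if times_already_used == 0:
        return 0.0
    return times_already_used * freq_penalty + pres_penalty

# Freq 0.85 / Pres 0.35, as in the longer-term preset below.
for n in (1, 5, 20):
    print(f"used {n:>2}x -> logit pushed down by {total_penalty(n, 0.85, 0.35):.2f}")
```
By the twentieth repeat the hit is big enough that the model would rather grab a synonym, which is exactly why long messages drift towards "shafts" and "wet entrances".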
==**Warning:** these presets only apply to Turbo. From my limited time with GPT-4, it requires a much more subtle approach, so please don't just apply the 1.05 temp to it unless you want it schizo. The general logic of the settings, however, is the same.==
**These presets work for nuTurbo and 16kTurbo too, you can safely use them.**
I have tested a few presets so far. My favorite one for now is a bit on the extreme side, but it fits my needs (I love flowery shit and don't mind editing some stuff) and stylistically works for most bots I play with. It makes text flowery and "claudesque" - in a very literal sense, you get ministrations and wild abandon with it:
>**Temperature: 1.01**
>**Frequency Penalty: 1.00**
>**Presence Penalty: 0.05**
Though, my chats are relatively short and context doesn't go to shit too fast. If you are looking for a more long-term solution, here's a more reasonable version of those values that was offered by a kind anon after some extensive testing:
>**Temperature: 0.9**
>**Frequency Penalty: 0.85**
>**Presence Penalty: 0.35**
Another preset I use is for basic chats with no extra floweriness, but with enough variety still:
>**Temperature: 1.05**
>**Frequency Penalty: 0.00**
>**Presence Penalty: 0.00**
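If you call the API directly instead of going through a frontend, the presets map one-to-one onto request parameters. A minimal sketch with the flowery preset, using the pre-1.0 `openai` Python client that was current for Turbo (model name and messages are placeholders - swap in your own):
```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",               # or gpt-3.5-turbo-16k
    messages=[
        {"role": "system", "content": "<your main prompt / card goes here>"},
        {"role": "user", "content": "<your message>"},
    ],
    temperature=1.01,                    # the flowery preset from above
    frequency_penalty=1.00,
    presence_penalty=0.05,
    top_p=1.0,                           # left at 1 so only temperature is in play
)
print(response["choices"][0]["message"]["content"])
```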
# Prompts
## Basic advice
- It's best to build up some context before you start the fun part. Slowly easing the bot into sex is safer than going GIB SEXO in the first message.
- Pay attention to your wording. No need to put monks in temples, but be careful with how you word noncon and extreme fetishes, since the new Turbo, especially 16k, is hyperfocused on consent and boundaries.
- Swipe. If the reply is crappy or you're getting the "I am sorry but", swipes are your friends, there's always a huge chance that your current generation is simply a case of bad luck.
- Pay attention to the OOC leaks. This is especially important for prompt set 3, since OOC appearing in bot messages is a direct indication that you're dangerously close to being filtered and the AI is struggling with replies.
- Write better. You don't have to put out 4 paragraphs, but if you just clearly indicate what you are doing and meaning, you will get better responses than from "umm yeah ok". The ultimate sex spell "ahh ahh mistress" doesn't count, it's simply too powerful.
- Don't forget that you can gently nudge the bot into any direction you want, including tricking it into consenting to your dirty pervy shit.
!!! danger Make sure to read descriptions before you copy and paste!
Description | Main | NSFW | JB
------ | ------ | ------ | ------
**1. My old set. Still good for vanilla, SFW, mild violence. Not good for extreme stuff.** | | |
**2. New set. Tested on nuTurbo, including 16k. Works for most things, excluding rape (50/50 works when you rape, rejects when you're the one raped, gets triggered by wording, so try to write around it and not go "GRUG RAEP PUSSEH"). Occasionally filters freeuse. Generally okay for anything consensual.** | | |
**3. Another set for 16k. ==Instructions: when you see the OOC leaking into your chat, it means you're in the danger zone and are about to trigger filters, be careful with your phrasing and try to subtly reinforce consent or mask the forceful action. Don't forget to delete the OOC leaks from your messages, unless you get off on reading those. Sometimes OOC may appear randomly in the safe chats, but it's very rare.==** | | |
**4. Meaux+Hochi nuTurbo Prompt+JB ==Foreword from Meaux: It works well with unhinged shit, is very versatile and robust, and works for both SFW and NSFW. Not horny by default; I'd recommend RPing organically and building context for a better experience. The intention here is to cater to freedom, for both anons and femanons. It is "ahhh ahhh mistress" friendly, and it should give short/medium/long responses - swipe away and pick whatever suits your RP. The same old Turbo hard limits are here, that is, killing, raping, medfag stuff... though it's *very* relative, give it a swipe or two, you'll eventually get the desired response. The closer to the extreme shit, the more likely it is to break formatting, output internal warnings, OOC, and even nuTurbo's own thoughts - amusing, nonetheless. Remember to delete whatever gibberish it outputs once you settle on a response. I won't go deep into how it works or why it works, it's technical and logical. I could encourage you to change stuff and play with it; be advised, though, that it's fragile and prone to breaking.==** | | Leave blank (BUT KEEP "NSFW ENCOURAGED" ACTIVE) |
Many thanks to Meaux (best kitty <3) for working with me - a lot of the heavy lifting was done by him.
I also recommend checking out YAnon's prompts here: https://rentry.org/YAnonTurbo
- Absolutely DO NOT try to put any message formatting guidance into your UJB - by that I mean, for example, the message format prompt used in [Antonius's](https://www.chub.ai/characters/Antonius) Touhou bots. That will make the outputs overly rigid, because Turbo will take it literally. Believe me. I tried.
- Also, square brackets and "System Note" are absolutely unnecessary because they don't do anything.
- Trying to set a particular output length by specifying the number of words, characters or tokens mostly doesn't work. The adorable retard can't count, especially not words or characters. Even the "1 paragraph, up to 4" looks like a waste of space, and I keep it purely out of sentimental value. The best way to get longer/shorter replies is to have a greeting of the desired length and to edit the outputs a few times to set an example for the AI to follow.
# Changelog
31.5.2023 - first published.
18.6.2023 - expanded the "Generation settings" section with better explanations; added prompt set 2 for nuTurbo.
20.6.2023 - added prompt set 3; added the "Basic advice" section to the Prompts; rewrote the Presence and Frequency penalties explanation because I've got some shit wrong, apologies for that. My presets still work, though.
11.7.2023 - expanded "Botmaking" section with some additional info; added some notes about negative Freq and Pres penalties; moved botmaking tips into a separate rentry: https://rentry.org/OnWritingCards.
[Jump to top](https://rentry.org/HochiTurboTips#tips-and-prompts-for-turbo)