BoT: Balaur of Thought
A SillyTavern script meant to force the square peg of common sense into the round hole of LLM-driven RP.
⭐Changes⭐ (It's gonna be a big one):
- The user manual has been updated.
- An (almost) full rewrite.
- Group chat support (finally).
- /sendas and /trigger are handled properly (see the example after this list).
- /comment, /note, /sd and their variants are handled properly.
- The databank is back in the form of long-term memory for character and chat. A new [🗃️] menu was added and will be improved in the future.
- Optional delay between generations for backends that complain about too many requests (in the [🧠] menu).
- Rethinking the character's message on a greeting now prompts the LLM to generate a new greeting that takes the user persona into account.
- You can now ask the LLM (not the character) arbitrary questions about the chat.
- Rephrasing can now take arbitrary user input as rephrasing criteria.
- Injection strings can now be viewed and edited.
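For context, the two slash commands mentioned above are standard SillyTavern commands: /sendas posts a message under another character's name, and /trigger forces a reply (from a specific member when used in a group chat). A minimal sketch, with "Alice" as a placeholder character name:

```
/sendas name="Alice" She waves from across the room.
/trigger Alice
```

Messages produced this way should now be handled by BoT just like regular sends.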
4.01 bugfixes:
- The BOTMKzINJECT bug.
- The /; typo bug.
- A typo in the databank entry autogeneration prompt caused an entirely different prompt to be used.
- Added a new initial analysis delay option under the [🧠] menu; it is off by default.
Features
- Prompts the LLM to make inferences about the scene, spatial awareness, and dialog. It also provides up to three courses of action, similar to tree-of-thought.
- Only the last batch of analyses is injected into the context, for token economy on limited context sizes.
- Analyses are available between sessions.
- Allows user to see every past analysis and prompt, the latter in a neat, color-coded format.
- Allows re-generating analyses in batch or individually, and adding further one-time instructions in case the LLM is being an ass.
- For cases of extreme LLM assery, analyses can be edited manually.
- In case I am being an ass, prompts can be edited bit by bit.
- Databank support for long-term memory. Duplicate and contradiction control are based on memory topic.
- Additional tools: rephrasing, interrogation, translation capabilities testing, and syncing the persona if it is switched during chat.
Known issues
- Every analysis is basically an extra generation, so with all analyses enabled, every char reply will take 4 generations. That is 4x the time and 4x the cost on pay-per-token plans.
- Every LLM has its own quirks, so your mileage may vary.
- A few measures have been put in place to harden the script's logic and keep it from falling apart in unexpected situations; however, cancelling stuff is still not recommended.
- Since the default syntax for injections and memories is XML-based, you might need to enable show tags in chat in order to see them from the view menu [👀], or you can change the syntax manually (see the sketch after this list).
- Better results are achieved with a good description of the user's persona. This affects some LLMs more than others, though.
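As a rough illustration of what an XML-wrapped injection or memory might look like in the chat, here is a minimal sketch; the tag name and wording are hypothetical placeholders, since the actual format depends on BoT's injection strings (which you can view and edit, as noted in the changes list above):

```xml
<spatial_analysis>
  Alice is seated across the table from {{user}}, near the window;
  the door is behind her and rain can be heard outside.
</spatial_analysis>
```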
Where to get
Latest
Catbox link: BoT 4.01
Mediafire link: BoT 4.01
Old versions
BoT 3.41 Catbox • BoT 3.41 MF
BoT 3.4 Catbox • BoT 3.4 MF
BoT 3.3 Catbox • BoT3.3 MF
BoT 3.2
How to install
Step 1:
Step 2:
Step 3:
Step 4:
Step 5:
Step 6: