tsukasa

list of all models (with quants): https://huggingface.co/ludis


https://huggingface.co/ludis/tsukasa-llama-3-70b-qlora (llama 3 70b tune)
https://huggingface.co/ludis/tsukasa-llama-3-8b-qlora (llama 3 8b tune)
https://huggingface.co/ludis/tsukasa-120b-qlora (goliath 120b tune)
https://huggingface.co/ludis/tsukasa-8x7b-qlora (mixtral 8x7b tune)
https://huggingface.co/ludis/tsukasa-7b-lora (mistral 0.1 7b tune)
https://huggingface.co/ludis/tsukasa-13b-qlora-limarp (llama2 13b tune)
https://huggingface.co/ludis/tsukasa-limarp-7b (llama2 7b tune, only this one has limarp in the name but they're all tuned on limarp)
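if you want to run one of the tunes outside a frontend, the lora/qlora repos can be loaded on top of their base model with peft, assuming the repo ships an adapter rather than merged weights (check each repo, and the quant repos linked above, for what's actually there). a rough sketch, with repo and base ids as examples:

# rough sketch: load a tsukasa lora/qlora adapter on top of its base model with peft.
# assumes the repo contains a peft adapter; if it ships merged weights, load it directly
# with AutoModelForCausalLM.from_pretrained instead. ids below are examples.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"   # base for the mistral 0.1 7b tune
adapter_id = "ludis/tsukasa-7b-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)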

the prompts and gen settings below aren't set in stone; they vary depending on what kind of character you use, whether it's a group chat, etc. if you find a system prompt, a UJB, or some gen settings that work nicely, email me. even though the model was tuned with metharme, people tell me it works fine (and sometimes better, depending on the card) with other prompt styles such as alpaca, so ¯\_(ツ)_/¯

the llama 3 tunes have different prompts/gen settings, see their readmes for info. the info below is for the mistral/mixtral/llama 2 tunes

silly tavern prompts

silly tavern context template json: https://feen.us/qdh4d2.json

story string:

<|system|>Below is an instruction that describes a task. Write a response that appropriately completes the request.

Write {{char}}'s next reply in a fictional roleplay chat between {{char}} and {{user}}.

{{#if personality}}{{char}}'s personality: {{personality}}{{/if}}

{{#if mesExamples}}This is how {{char}} should talk: {{mesExamples}}{{/if}}

Then the roleplay chat between {{char}} and {{user}} begins.

{{#if scenario}}This scenario of the conversation: {{scenario}}{{/if}}


agnai prompts

gaslight:

<|system|>Below is an instruction that describes a task. Write a response that appropriately completes the request.

Write {{char}}'s next reply in a fictional roleplay chat between {{#each bot}}{{.name}}, {{/each}}{{char}} and {{user}}.

{{char}}'s Persona: {{personality}}

{{#if example_dialogue}}This is how {{char}} should talk:
{{example_dialogue}}{{/if}}

Then the roleplay chat between {{#each bot}}{{.name}}, {{/each}}{{char}} and {{user}} begins.

{{#if scenario}}This scenario of the conversation: {{scenario}}{{/if}}

{{#each msg}}{{#if .isbot}}<|model|>{{/if}}{{#if .isuser}}<|user|>{{/if}}{{.name}}: {{.msg}}
{{/each}}
{{#if ujb}}<|system|>{{ujb}}{{/if}}
<|model|>{{post}}


gen settings

they're not set in stone; if you find better ones, email me :)

7b
temp 1 and all samplers disabled except typical p at 0.95

8x7b
temp 0.9, dynatemp_range 0.3, min_p 0.05, repetition_penalty 1.01

120b
temp 1.2, dynatemp_range 0.3, min_p 0.05, repetition_penalty 1.01

stopping strings

add <|user|>, <|system|>, and <|model|> to custom stopping strings: ["<|user|>","<|model|>","<|system|>"]
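if you're running a gguf quant through llama-cpp-python (or similar) instead of a frontend, the settings above map onto its sampler arguments. a rough sketch of the 8x7b settings plus the stopping strings (the model path and prompt are placeholders; dynatemp_range isn't shown because not every backend exposes dynamic temperature the same way):

# rough sketch: tsukasa 8x7b gen settings + stopping strings via llama-cpp-python.
# dynatemp_range is left out; set it in your backend if it supports dynamic temperature.
from llama_cpp import Llama

llm = Llama(model_path="tsukasa-8x7b.Q4_K_M.gguf", n_ctx=8192)  # placeholder path

prompt = "<|system|>Write Tsukasa's next reply in a fictional roleplay chat between Tsukasa and Anon.<|user|>Anon: hey<|model|>Tsukasa:"

out = llm.create_completion(
    prompt=prompt,
    temperature=0.9,
    min_p=0.05,
    repeat_penalty=1.01,
    stop=["<|user|>", "<|model|>", "<|system|>"],
    max_tokens=300,
)
print(out["choices"][0]["text"])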


limarp

since the last dataset the model was tuned on (limarp) includes personas for both characters, you might get better results if you include a persona for the character you are roleplaying as, not just the bot. also, limarp data doesn't use asterisks for actions and puts dialogue in quotes.
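for example, a reply in that style (made-up lines, just to show the formatting) looks like:

Tsukasa tilts her head, ears twitching, and steps out from behind the shrine gate. "You're a long way from the village. Lost, or just nosy?"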

the model

trained on unfiltered instruct data, then on pygmalion's PIPPA data, then on limarp. all in metharme format (trained as plain completions in axolotl)
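for reference, a rendered metharme prompt (the same tokens the templates above expand into, with made-up content) looks roughly like:

<|system|>Below is an instruction that describes a task. Write a response that appropriately completes the request.

Write Tsukasa's next reply in a fictional roleplay chat between Tsukasa and Anon.

Tsukasa's personality: a mischievous fox spirit who guards an old mountain shrine.

Then the roleplay chat between Tsukasa and Anon begins.
<|user|>Anon: hey, you lost?
<|model|>Tsukasa: "Lost? I live here." She flicks her tail and grins.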

cards with natural language for their personas as opposed to something like W++ will give you much better outputs
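i.e. something like the first snippet below rather than the second (both made up):

natural language: Tsukasa is a mischievous fox spirit who guards an old mountain shrine. She teases visitors relentlessly but is fiercely protective of anyone she considers a friend.

W++-style: [character("Tsukasa") { Species("fox spirit") Personality("mischievous" + "protective") Location("mountain shrine") }]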

contact

ludis@cock.li
logs and gen settings welcome here :)

Pub: 01 Sep 2023 21:40 UTC
Edit: 23 Apr 2024 15:32 UTC