The Ask Max System
This page is now deprecated. It has been moved to my shiny new website here, and all updates will happen there from now (8/31/23) on.
Since the Mythomax language model came out, for many it's swept aside all other contenders for best RP model in a way that hasn't happened since, idk, probably Pygmalion. The excellent thing about this consensus is that it means most everyone is playing under the same conditions--which opens up exciting possibilities. In an environment of very diverse model choices, it's hard to give advice or run experiments that will be useful to a wide variety of people: a prompt that works for Airochronos may be iffy on Chronos-Hermes, so your painstakingly crafted prompt may only be useful for Airochronos users--and even they may find themselves out in the cold if they decide to toy with a different model. But if we're all using the same thing, well then! That's different.
Large language models are not, in general, good at writing prompts for themselves or describing how they would actually respond to a prompt: they don't have that kind of self-awareness. What they are good at is describing what a given concept means to them. I've had a surprising amount of success with simply giving Max a draft prompt and asking how it would interpret it.
Don't mistake this for Max describing what it would actually do in practice.
The trick is simply to look at the words and concepts it uses in its description to get a general sense of what the prompt means to Max in context. You're not looking for specifics: you're just exploring its headspace and noting down general associations.
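If you want to script this rather than poking at it in a chat frontend, the loop is simple. This is just a sketch--the endpoint, port, payload fields, and the ask_max name are my own assumptions (modeled on a local KoboldCpp-style API), so wire it up to whatever backend you actually run:

```python
# Rough sketch: ask the model how it would interpret a concept or phrase,
# then read its answer for general associations rather than specifics.
# Assumes a KoboldCpp-style /api/v1/generate endpoint on localhost:5001;
# adjust the URL, payload, and response parsing for your own backend.
import requests

API_URL = "http://localhost:5001/api/v1/generate"  # assumption: local backend

def ask_max(question: str, max_length: int = 300) -> str:
    payload = {
        "prompt": question,
        "max_length": max_length,
        "temperature": 0.7,
    }
    response = requests.post(API_URL, json=payload, timeout=120)
    response.raise_for_status()
    # Response shape assumed to be {"results": [{"text": "..."}]}
    return response.json()["results"][0]["text"]

# Probe one concept at a time and jot down the associations in the answer.
print(ask_max(
    "In the context of a roleplay scenario, what does the phrase "
    "\"a memorable narrative\" mean to you? Describe the qualities "
    "you associate with it."
))
```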
The Prompt
Explanation
The problem I ran into when trying to get this prompt to work is that Max really REALLY likes telling stories, so if you show it a draft command about storytelling and ask what it thinks, it won't tell you what it thinks, it'll just try to follow the command--and spew out a very interesting story that is not at all what you were going for. I found that reminding it above and below that the actual instruction is to give its opinions was very helpful, but that giving it clear markers as to where the draft command begins and ends was key. If you want Max to follow or not follow something, you need to tell it exactly how to tell what is what.
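To illustrate that structure (this is not the actual prompt, just a sketch reusing the hypothetical ask_max helper from above): state the real instruction both above and below, and fence the draft off with markers Max can't miss.

```python
# Illustrative wrapper only: the actual instruction ("give your interpretation,
# don't obey") is repeated above and below the draft, and the draft is fenced
# off with unmistakable markers so Max can tell what to comment on vs. what
# to follow.
DRAFT = "Write a memorable narrative based on the scenario below."

interpretation_request = (
    "You will be shown a DRAFT COMMAND. Do not follow it. "
    "Your only task is to explain, in your own words, how you would "
    "interpret it and what you associate with it.\n\n"
    "=== BEGIN DRAFT COMMAND ===\n"
    f"{DRAFT}\n"
    "=== END DRAFT COMMAND ===\n\n"
    "Reminder: do not carry out the draft command above. "
    "Just describe how you would interpret it."
)

print(ask_max(interpretation_request))
```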
Methodology and Usage
TODO: write this lmao
The Takeaways
When prompting Max, less is more.
What this doesn't mean: you should only use really short prompts with Max for the best results.
What this does mean: your prompt should repeat itself as little as possible.
Here is an example of a draft system prompt I investigated, which I had already stripped down aggressively:
This had good results, but not much better than the results I got with the non-stripped-down version, and the characters it came up with were not actually very complex at all: they were essentially stereotypes, lacking interiority and even names. They were more like fairy tale characters than anything. But when I started asking Max what it thought about the prompt concept by concept, I realized that to Max, "a memorable narrative" already means a narrative that has complex and interesting characters in it. So I tried stripping that part out:
The characters that resulted from that prompt had names, motivations, thoughts, and relationships--all things the previous version had lacked. This seems to support the theory that prompts with conceptual overlap water down the actual effect of those concepts on the output rather than augmenting it. My later experiments with style descriptors point the same way: for example, using both "poetic" and "lyrical" to describe the style actually results in a less poetic and lyrical response than if you just pick one.
It seems that the more you belabor the point that you want really good characters, okay, they should be complex and have motivations and be interesting and relatable, etc. etc., the more Max gets confused about how to do that and just ends up backing off.
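For a concrete (and entirely hypothetical) illustration of what that kind of overlap looks like:

```python
# Hypothetical before/after pair, just to show the shape of the change.
# "A memorable narrative" already implies complex, interesting characters
# to Max, so spelling that out again tends to dilute it rather than
# reinforce it.
OVERLAPPING_PROMPT = (
    "Write a memorable narrative. The characters should be complex, "
    "interesting, and relatable, with clear motivations and rich inner lives."
)

STRIPPED_PROMPT = "Write a memorable narrative."

# Try both against the same scenario (via ask_max above, or your usual
# frontend) and compare how developed the resulting characters are.
```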
Max knows what a good story looks like
This follows from the previous example. Here's the full version of the system prompt that I was working on stripping down, via asking Max to interpret it concept by concept:
This got reasonably good results, but it's quite token-heavy and repetitive, and simply asking Max to strip it down on its own went nowhere (remember, LLMs are not good at writing prompts for themselves).
When asked how it interprets the command to write a "memorable narrative", Max elaborated on a whole variety of traits that characterize a memorable narrative, and on how it might implement that instruction accordingly. Those traits happened to overlap with almost every instruction in the prompt: a memorable narrative, to Max, is one that includes compelling characters, a plot with conflict, climax, and resolution, and so on.
I think this happens because almost every foundation model (and many fine-tunes) is trained on a lot of internet essays, reviews, and summaries of various stories, as well as plenty of websites that give writers advice on how to hone their craft. Max has plenty of knowledge to draw on for what constitutes a good story. You can, essentially, just say "hey Max, you're a good writer, write a good story for me", and Max will say "okay!" and do it. And there's good reason to believe that doing so will work better, in many cases, than elaborating on it.