Magnum 32B v2 - Public Proxy
Status: Offline.
- Public link (OAI endpoint, experimental):
- Private link (Kobold Classic endpoint): Discord only.
The proxy, along with any generated tokens, will be reset if abuse (or further abuse) is detected.
Responses may be delayed up to one minute or more. This is being hosted by a single person :wah:
What is this
A public proxy hosting Magnum 32B v2 made by the good folks over at https://anthra.site/ & https://sillytilly.org/
While you may be mourning the loss of the 3.5 Sonnet proxies and having to resort to 4o again, do not fret: you can run Magnum 32B v2 on your own computer, or use it through this API.
Info
Max context: 30720
Output: 512-2048
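Quick illustration of what those numbers mean (my own napkin math, nothing the proxy enforces beyond the limits above): whatever you allow for the reply comes out of the same context window, so the prompt gets what's left.

```python
# Rough token-budget arithmetic based on the limits listed above (illustration only).
MAX_CONTEXT = 30720   # max context
MAX_OUTPUT = 2048     # top of the 512-2048 output range

# The reply shares the context window with the prompt (system prompt, character card,
# chat history), so the prompt budget is whatever remains after reserving the output.
prompt_budget = MAX_CONTEXT - MAX_OUTPUT
print(f"Prompt budget at max output: {prompt_budget} tokens")  # 28672
```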
How to setup (OAI)
- Go to the "Api Connections" tab.
- Open the list of available APIs and select
Chat Completion
, then selectOpenAI
as your completion source. - Paste PUBLIC proxy endpoint on "Proxy Server URL" with your token.
- Enable 'Show "External" models (provided by API)'
- Select the model called
koboldcpp/magnum-32b-v2
. - Use a clean preset (no JB needed) and follow instructions of Kobold setup (only 4 and 5).
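If you want to poke the OAI endpoint directly instead of going through SillyTavern, a minimal sketch looks something like this. The URL and token are placeholders (use whatever you were given), and I'm assuming the endpoint follows the usual `/chat/completions` path of OpenAI-compatible proxies.

```python
# Minimal sketch of a Chat Completion request against an OpenAI-compatible proxy.
# PROXY_URL and PROXY_TOKEN are placeholders - substitute the values you were given.
import requests

PROXY_URL = "https://your-proxy-url-here/v1"   # hypothetical; use the PUBLIC endpoint
PROXY_TOKEN = "your-token-here"

resp = requests.post(
    f"{PROXY_URL}/chat/completions",
    headers={"Authorization": f"Bearer {PROXY_TOKEN}"},
    json={
        "model": "koboldcpp/magnum-32b-v2",
        "messages": [
            {"role": "system", "content": "You are a helpful roleplay assistant."},
            {"role": "user", "content": "Introduce yourself in one sentence."},
        ],
        "max_tokens": 512,  # stay within the 512-2048 output range
    },
    timeout=120,  # responses can take a minute or more
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```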
How to setup (Kobold)
- Go to the "Api Connections" tab.
- Open the list of available APIs and select
KoboldAI Classic
. - Paste URL in the box of the page (URL at the top of the page) and connect.
- Go to "Advanced Formatting" (Tab with the symbol "A" in SillyTavern).
- Use the preset called
ChatML
in "Context Template" and "Instruct Mode", enabling the latter. - For the preset, use either
Universal Light
orUniversal Creative
. - That's all! Go and chat with your characters, shit's better then GPT 4o.
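For the curious: under the hood, the Kobold Classic connection boils down to a plain `/api/v1/generate` call, with the ChatML template wrapping your chat. A rough sketch (the URL is a placeholder and the sampler values are arbitrary):

```python
# Sketch of a raw KoboldAI (Classic) API call with a ChatML-formatted prompt,
# which is what the ChatML Context Template / Instruct Mode build for you.
import requests

KOBOLD_URL = "https://your-kobold-url-here"  # placeholder for the proxy/tunnel URL

prompt = (
    "<|im_start|>system\nYou are a helpful roleplay assistant.<|im_end|>\n"
    "<|im_start|>user\nIntroduce yourself in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

resp = requests.post(
    f"{KOBOLD_URL}/api/v1/generate",
    json={
        "prompt": prompt,
        "max_context_length": 30720,  # matches the proxy's max context
        "max_length": 512,            # output tokens
        "temperature": 0.9,           # arbitrary sampler value for the sketch
        "stop_sequence": ["<|im_end|>"],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```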
Made by Anthracite & SillyTilly
Hosted by SmileyTatsu
Proxy Status
Last updated: 8/9/2024, 4:11:51 AM (Terminated)
Local/Cloud Hosting
Section dedicated to explaining how to host your own models! By request of an anon.
I'm not an expert on local models, but I think I can help other anons host a model without too much trouble.
Selecting the model
There are many models to choose from. The ones I can recommend are the Magnum models; they are among the few I have tried, and I can say they have decently good quality. But it all depends on whether you want to host on your PC or in the cloud, and how much you would like to pay for the second option. Some model recommendations to get you started:
- Magnum 12B v2 - a 12 billion parameter model, small enough to run on 12GB VRAM GPUs with the right quant and context settings.
- Magnum 32B v2 - a 32 billion parameter model, sized to run on 24GB VRAM GPUs.
- Magnum 72B v1 - a 72 billion parameter model with roughly the same quality as the 32B v2; you would just burn more resources, but if you want to try it, it is sized for 48GB VRAM GPUs.
These are the three usual tiers for hosting on your own PC: 12, 24, and 48GB of VRAM (the last one if you are a millionaire). I recommend skipping the 72B in favor of the 32B if you have 48GB of VRAM. Why? Because you can use the full 32k context the model offers, run a higher quant, and still have headroom. This is the configuration the proxy uses to host Magnum 32B v2.
I will briefly explain what quants are soon. You can calculate how much memory you would need for X model with Y context and a certain quant using this calculator: https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
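If you just want a feel for the numbers before opening the calculator, the napkin math is roughly: weight size ≈ parameters × bits per weight / 8, plus a few GB for the KV cache that grows with context. A rough sketch of that approximation (my own, not the calculator's exact formula):

```python
# Very rough VRAM estimate: quantized weights + a KV-cache allowance.
# Approximation for intuition only; use the linked calculator for real numbers.

def rough_vram_gb(params_billion: float, bits_per_weight: float, kv_cache_gb: float) -> float:
    weights_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + kv_cache_gb

# Example: a 32B model at ~Q4 (about 4.5 bits/weight) with a few GB reserved for context.
print(f"{rough_vram_gb(32, 4.5, 4.0):.1f} GB")  # ~22 GB, in line with the 24GB VRAM tier above
```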
Local or Cloud
This decision depends on your PC and your personal standards. Do you have a PC powerful enough to run local models? Then you are on your way to hosting it on your PC. Do you feel that the model you can host locally does not satisfy you? Then you should go to the cloud.
Local
Advantages
- If you already have the equipment, you don't need to depend on paying third parties.
- You don't have to wait for the models to be installed each time (or pay for storage).
- No dependence on third parties.
- Completely anonymous.
Disadvantages
- Requires heavy hardware.
- Can be slower if your GPU is at the limit.
- Sometimes will require you not to do other things to avoid overloading your computer.
- Can be tedious to configure for some.
Cloud
Advantages
- You don't need to pay hundreds of dollars for a GPU.
- Usually the most affordable and simple options include templates to configure the model, making it a matter of a few clicks.
- Cheaper in the short/medium term compared to buying a GPU.
- Accessibility to switch GPUs whenever you want.
- Pay only for what you use.
- Only 0.3 USD per hour for an A40 GPU to host Magnum 72B v1 at 20k context, or to host (the top) Magnum 32B at 32k context with extra VRAM to spare for whatever you want.
Disadvantages
- If you don't pay for storage, you have to install the model you want to use each time (usually it doesn't take long thanks to the speed, but it depends on your provider).
- It depends on whether the provider has a certain GPU available.
Taking the above into account, you must decide what you will do. I personally have the equipment to host 12B models at 16k context; however, I am currently paying for cloud hosting to run the 32B for this proxy. To be honest, if you are not sure, just try and see: go local first, and if you don't like the quality, throw some dollars at RunPod.
Hosting on local
I am just going to list the steps you need to follow to host the model.
- Look for a model that you like and can host on your computer (we will handle the GGUF versions because they are easier to explain and support CPU splitting, which I will not explain here; at the end of this Rentry I list other guides where it is explained).
- Download the quant you need (I recommend at least a Q4 quant; anything below that decreases the quality too much. I don't recommend Q8 either, it's better to use a Q6 thanks to the space saved and minimal quality loss). A small download sketch follows this list.
- Save it in a folder where you keep all your models (to make it easier, save it in a folder called `KoboldCPP` and create a subfolder called `models`, i.e. `KoboldCPP/models/your_model.gguf`).
- Download KoboldCPP. Follow the instructions on https://github.com/LostRuins/koboldcpp (if you are on Windows, save the .exe in the `KoboldCPP` folder created previously so we can execute commands with it. If you are a Linux user, you should know what to do).
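If you'd rather download from a script than the browser, `huggingface_hub` can grab a single GGUF file for you. The repo and file names below are placeholders; copy the real ones from the model page you picked.

```python
# Hedged sketch: download one GGUF quant into KoboldCPP/models/ with huggingface_hub.
# The repo_id and filename are hypothetical placeholders - use the real ones from the model page.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="someuser/magnum-32b-v2-GGUF",   # placeholder repo
    filename="magnum-32b-v2.Q6_K.gguf",      # placeholder quant file (Q6 recommended above)
    local_dir="KoboldCPP/models",            # folder layout suggested in the steps above
)
print(f"Saved to {path}")
```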
Now it depends on how you want to run it: either execute the .exe (mostly Windows users) or use the command line.
.exe method:
- Execute `koboldcpp.exe` and wait for it to start. It should open a CMD window and a UI; ignore the CMD for now.
- Normally you don't need to change much unless you want to experiment. If you don't feel like it, don't touch anything (and hope it works); if you really want to know what is happening, read the official wiki https://github.com/LostRuins/koboldcpp/wiki.
- If you are not touching anything OR you are using `CuBLAS`, enable `Flash Attention` on Quick Launch, increase/decrease the Context Size to the size you want, and choose the model. If you want to share the model with other devices, enable `Remote Tunnel`. Then just click Launch.
- After clicking Launch, the UI will close. Open the previously opened CMD window and you will see a lot of output appear; this is just the model loading and printing its full configuration, which you usually don't need to care about unless you're technical. At the end you will get the links for each connection (the last one is for the web UI); a command-line launch sketch also follows below.
If you are running this for the first time, it may install Cloudflare for you.
- As a last part, just follow the #How to Setup (Kobold) section.
This section is for Magnum models (at least 72B v1 and 32B v2); other models may require different `Context Template` and `Instruct Mode` settings to work (these should be listed on the main model page). Playing with the preset settings is also recommended.
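If you'd rather skip the UI entirely, you can also launch KoboldCPP from a script using the same flags that show up in the cloud template's `KCPP_ARGS` further down. Just a sketch; the model path and context size are placeholders you should adjust to your setup.

```python
# Sketch: launch KoboldCPP headlessly with flags taken from the KCPP_ARGS default
# shown in the cloud section (only the model path and context size are changed here).
import subprocess

cmd = [
    "koboldcpp.exe",                      # or the Linux binary / "python koboldcpp.py"
    "--model", "models/your_model.gguf",  # placeholder path from the folder layout above
    "--usecublas", "mmq",
    "--gpulayers", "999",
    "--contextsize", "16384",             # pick whatever fits your VRAM
    "--flashattention",
]
subprocess.run(cmd, check=True)
```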
Hosting on cloud
I am just going to list the steps you need to follow to host the model.
- Look for the model you like and want to host (we will handle the GGUF versions because they are easier to explain and support CPU splitting, which I will not explain here; at the end of this Rentry I list other guides where it is explained).
- Copy the download link for the quant you want to use: right-click the download icon, copy the link, and save it somewhere for later.
Now, I will explain the RunPod method, since it is the easiest one for me.
- Go to https://www.runpod.io/ (or use my cute referral link bleh https://runpod.io?ref=urj78syx) and create an account. Then put some credits in there. I don't know the minimum honestly; 10 USD should be more than enough for about 30 hours on the settings I will show (more if you use a cheaper GPU), and remember that you can turn the pod off, so this can last a full month if you only use it 1 hour per day lol.
- Go to the pods section and click on "Deploy a Pod". You will be shown a list of GPUs; if you are doing this alone you can just choose the A40, a cheap 48GB VRAM GPU (cheaper than some 24GB VRAM options and more worth it imo).
- Now you need to go and search for the "KoboldCpp - Official Template - Text and Image" template. Should be the only one appearing when searching for "KoboldCPP".
- Click on "Edit Template".
- Edit `Container Disk (Temporary)` to the size of your model + 10GB (I just like to leave some free space in case anything happens). Usually 100GB is overkill and will only take more of your cute money.
- Click on "Environment Variables" and remove `KCPP_IMGMODEL` and `KCPP_WHISPERMODEL`; these are just for image gen or the Whisper model. Unless you know what you are doing, you should not keep them, since they will slow down the startup.
- Edit the variable called `KCPP_MODEL` and replace the default value with your GGUF model link (saved previously).
- You may need to edit the variable called `KCPP_ARGS`; the default value is `--usecublas mmq --gpulayers 999 --contextsize 4096 --multiuser 20 --flashattention --ignoremissing`. I recommend only changing `contextsize` to the correct one; `multiuser` is the max queue that Kobold will handle before throwing errors. For more flags, or to know what a certain one does, read the wiki https://github.com/LostRuins/koboldcpp/wiki.
- The last thing you will need to change is `Instance Pricing`. Unless you are planning to run a 24/7 service or something like that, you couldn't care less about the "On Demand" option; it's just more expensive. Change it to "Spot", which basically means that if the GPU is needed elsewhere, they will take it away from you. But considering you only want it for a while, and the A40s are not in demand, you'll be fine and pay less.
- Just click Deploy after that; you can see how much you are going to spend per hour at the bottom.
- After clicking Deploy, it will wait a few seconds before redirecting you to "My Spots". Open the spot that is starting, click "Logs", switch to "Container Logs", and click the arrow at the top (Tail Logs). This shows the startup process; at the end you will get the links for each connection (the last one is for the web UI). A quick endpoint check is sketched below.
If you are running this for the first time, it may install Cloudflare for you.
- As a last part, just follow the #How to Setup (Kobold) section.
This section is for Magnum models (at least 72B v1 and 32B v2); other models may require different `Context Template` and `Instruct Mode` settings to work (these should be listed on the main model page). Playing with the preset settings is also recommended.
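Once the pod is running, a quick way to check the endpoint is alive before pointing SillyTavern at it is to ask Kobold which model it loaded. Sketch only; the URL is a placeholder for whatever connection link the container logs gave you.

```python
# Quick sanity check against the KoboldAI Classic endpoint the pod exposes.
# KOBOLD_URL is a placeholder for the connection link shown in the container logs.
import requests

KOBOLD_URL = "https://your-pod-url-here"

resp = requests.get(f"{KOBOLD_URL}/api/v1/model", timeout=30)
resp.raise_for_status()
print(resp.json())  # expect something like {"result": "koboldcpp/your-model-name"}
```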
More questions?
Contact me via Discord or ask on the `/lmg/` thread (to be honest, this is the better option lol, they should know a lot more than me).
Discord username: SmileyTatsu.
Cool links
- Search for the `/lmg/` or `Local Models` thread; they have a lot of cool links about hosting LLMs locally.
- VRAM calculator: https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
- Search for models: https://huggingface.co/models
- Quantize models: https://huggingface.co/spaces/ggml-org/gguf-my-repo
- Explanation of models: https://rentry.org/lmg-spoonfeed-guide#4-models
If you would like to support me in continuing to invest time in hosting and administrating these proxies, donations are always welcome >~<