chat-ui | uncensored | local | with websearch
run uncensored local llms with web search
install ollama
https://ollama.ai/
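on linux, ollama also ships an install script (macos/windows builds are on the site):
curl -fsSL https://ollama.com/install.sh | sh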
install and configure hf chat-ui
git clone https://github.com/huggingface/chat-ui.git
cd chat-ui
initialize database:
docker run -d -p 27017:27017 --name mongo-chatui mongo:latest
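optionally check that the container came up:
docker ps --filter name=mongo-chatui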
create file .env.local with content:
| MONGODB_URL=mongodb://localhost:27017
USE_LOCAL_WEBSEARCH=true
MODELS=`[
  {
    "name": "sauerkrautlm-una-solar-instruct.Q3_K_S",
    "chatPromptTemplate": "{{#each messages}}{{#ifUser}}### User:\n{{content}}\n\n{{/ifUser}}{{#ifAssistant}}### Assistant:\n{{content}}\n\n{{/ifAssistant}}{{/each}}",
    "parameters": {
      "temperature": 1.0,
      "top_p": 0.95,
      "repetition_penalty": 1.2,
      "top_k": 50,
      "truncate": 3072,
      "max_new_tokens": 1024,
      "stop": []
    },
    "endpoints": [
      {
        "type": "ollama",
        "url": "http://127.0.0.1:8001",
        "ollamaName": "sauerkrautlm-una-solar-instruct.Q3_K_S"
      }
    ]
  }
]`
|
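for reference, the chatPromptTemplate above flattens the conversation into plain text; a user message "hi" answered with "hello" renders as (each turn is followed by a blank line):
| ### User:
hi

### Assistant:
hello
|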
edit web query examples
edit the file ~/chat-ui/src/lib/server/websearch/generateQuery.ts and add some nsfw examples to the convQuery array, e.g.:
{ from: "user", content: "search hentai" },
{ from: "assistant", content: `hentai girls` },
{ from: "user", content: "search sexy girls" },
{ from: "assistant", content: `sexy girls` },
replace:
const webQuery = ...
with:
// let instead of const, so the extracted query can overwrite it below
let webQuery = await generateFromDefaultEndpoint({
... // keep the original arguments as they are
});
// some models echo the chat template; strip a leading "Assistant:" marker if present
const match = webQuery.replaceAll("\n", "").match(/^.{0,5}Assistant:(.*)/i)
webQuery = (match && match[1].trim()) || webQuery
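the regex tolerates up to five characters before "Assistant:" (e.g. a leading "### "), so a completion like "### Assistant: sexy girls" is reduced to "sexy girls", while anything that doesn't match passes through unchanged.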
download and install model
wget -O sauerkrautlm-una-solar-instruct.Q3_K_S.gguf "https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF/resolve/main/sauerkrautlm-una-solar-instruct.Q3_K_S.gguf?download=true"
(-O keeps the ?download=true query string out of the saved filename)
create file Modelfile with contents:
| FROM sauerkrautlm-una-solar-instruct.Q3_K_S.gguf
TEMPLATE """
### User:
{{ .Prompt }}
### Assistant:
"""
|
ollama create sauerkrautlm-una-solar-instruct.Q3_K_S -f Modelfile
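optionally sanity-check the imported model from the cli:
ollama run sauerkrautlm-una-solar-instruct.Q3_K_S "hello"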
create a proxy that rewrites requests to ollama
chat-ui sends its requests to port 8001 (the endpoint url configured in .env.local above); the proxy overwrites the model field in each request and forwards it to ollama on port 11434.
create file node_proxy.mjs with the following contents:
| import http from 'http'

const model = "sauerkrautlm-una-solar-instruct.Q3_K_S"
const port = 8001
const targetPort = 11434

const server = http.createServer((req, res) => {
  if (req.method !== 'POST') {
    // chat-ui only POSTs to the model endpoint; reject anything else
    res.writeHead(404)
    res.end()
    return
  }
  // buffer the whole body so the json can be parsed and rewritten
  let body = ''
  req.on('data', chunk => {
    body += chunk.toString()
  })
  req.on('end', () => {
    // force the model name, whatever chat-ui sent
    const data = JSON.parse(body)
    data.model = model
    body = JSON.stringify(data)
    console.log("replaced request", body)
    const bodyBuffer = Buffer.from(body)
    const options = {
      hostname: 'localhost',
      port: targetPort,
      path: req.url,
      method: req.method,
      // Content-Length belongs in headers, not at the top level of options
      headers: {
        'Content-Type': 'application/json',
        'Content-Length': bodyBuffer.length
      }
    }
    const targetReq = http.request(options, (targetRes) => {
      // stream ollama's response straight back to chat-ui
      res.writeHead(targetRes.statusCode, {
        'content-type': targetRes.headers['content-type'] || 'application/json'
      })
      targetRes.on('data', chunk => {
        console.log(chunk.toString())
        res.write(chunk)
      })
      targetRes.on('end', () => res.end())
    })
    targetReq.on("error", (e) => {
      console.log("err", e)
      res.writeHead(502)
      res.end()
    })
    // the body is already fully buffered, so send and end the request here
    // (the original req.pipe(targetReq) after 'end' was redundant)
    targetReq.end(bodyBuffer)
  })
})

server.listen(port, () => {
  console.log('Listening for requests on port ' + port)
})
|
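note that the proxy has to buffer the full request body before forwarding, since it parses and rewrites the json; the response is streamed back to chat-ui unmodified apart from logging.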
run
node node_proxy.mjs
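you can smoke-test the proxy with curl; the model name doesn't matter since the proxy overwrites it (this assumes chat-ui's ollama backend talks to ollama's standard /api/generate endpoint):
curl http://127.0.0.1:8001/api/generate -d '{"model":"x","prompt":"hi"}'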
cd ~/chat-ui
npm run dev
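the dev server normally comes up on http://localhost:5173 (vite's default port).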
if the database container is not running (e.g. after a reboot), start it again by name:
docker start mongo-chatui