77e7e0b0cf
don't clear the system prompt actually
...
simonw plugins (e.g. llm-anthropic, llm-gemini) don't send the system
prompt as a {role: system} message in build_messages, so the model will
'forget' its system instructions if the prompt isn't retransmitted every time :/
2025-03-26 20:55:58 +00:00
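A minimal sketch of the failure mode this commit works around, with an illustrative stand-in for a plugin's `build_messages` (the function and field names here are assumptions, not the actual plugin code): if the system prompt is only included when set on the current turn, a follow-up turn without it silently drops the system instructions, so the fix is to pass it on every turn.

```python
def build_messages(history, user_text, system=None):
    """Mimic a plugin-style message builder: the system prompt is only
    added when passed explicitly for this turn, never replayed from
    earlier history."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    for prev_user, prev_assistant in history:
        messages.append({"role": "user", "content": prev_user})
        messages.append({"role": "assistant", "content": prev_assistant})
    messages.append({"role": "user", "content": user_text})
    return messages

# First turn: system prompt present.
turn1 = build_messages([], "hi", system="You are terse.")
# Second turn without retransmitting: the system message vanishes.
turn2_forgetful = build_messages([("hi", "hello")], "more")
# Fix from this commit's reasoning: retransmit the system prompt each time.
turn2_fixed = build_messages([("hi", "hello")], "more", system="You are terse.")
```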
115c762e57
don't 404 when trying to connect to a non-existent conversation, just create it
2025-03-02 07:58:24 +00:00
3a73e21c5b
display the current model in use
2025-03-01 12:50:52 +00:00
ef0cfbd72d
add more code highlight languages & split out hljs
2025-02-27 08:46:14 +00:00
baedcbaab7
display system prompt
2025-02-27 07:54:53 +00:00
323802e740
use user-supplied model id
2025-02-26 16:13:31 +00:00
710a6de7bc
load the system prompt on-the-fly instead of once at startup
...
this lets us modify it [for new conversations] on disk while the llm server
is running
2025-02-26 10:41:59 +00:00
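The pattern here is small but worth spelling out: re-read the prompt file per request instead of caching it at startup, so edits on disk take effect for new conversations without restarting the server. A sketch under assumed names (the path and helper are illustrative, not the project's actual code):

```python
import os
import tempfile
from pathlib import Path

def load_system_prompt(path):
    """Re-read the system prompt from disk each time it's needed, so
    on-disk edits apply to new conversations while the server runs."""
    return Path(path).read_text()

# Demonstrate with a throwaway file standing in for the real prompt file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("v1")
    path = f.name
first = load_system_prompt(path)   # initial contents
Path(path).write_text("v2")        # edit the file "while the server runs"
second = load_system_prompt(path)  # next read picks up the change
os.unlink(path)
```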
e45fde5178
send and display the conversation name
2025-02-26 10:21:15 +00:00
b8f7ce6ad7
send real id if conversation_id is 'new'
2025-02-26 08:52:43 +00:00
b0b5facced
brevity
2025-02-26 08:48:30 +00:00
a2367f303e
write a little frontend
2025-02-26 08:10:58 +00:00
2ef699600e
idk formatting
...
i'm just procrastinating because i don't think i want to start making the
client
2025-02-26 04:36:06 +00:00
2e828600ea
skip sending previous context if we're continuing a connection
2025-02-26 04:33:57 +00:00
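One way the client side of this might look, purely illustrative (the payload field names are assumptions): the first websocket message only carries prior context when starting fresh, since on a continuing connection the server already holds it.

```python
import json

def initial_payload(conversation_id, history, continuing):
    """Build the first message sent over the websocket. When continuing
    an existing connection the server already has the context, so we
    skip resending it; otherwise we include the prior turns."""
    payload = {"conversation_id": conversation_id}
    if not continuing:
        payload["context"] = history
    return json.dumps(payload)

history = [{"role": "user", "content": "hi"}]
fresh = json.loads(initial_payload("abc123", history, continuing=False))
cont = json.loads(initial_payload("abc123", history, continuing=True))
```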
deb2f6e8e5
get inference over websocket working
2025-02-26 04:20:28 +00:00
75c5c2db63
initial commit
2025-02-25 17:39:29 +00:00