3a73e21c5b | display the current model in use | 2025-03-01 12:50:52 +00:00
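
Displaying the model in use suggests the server announces its active model as a metadata frame when the socket opens, and the client renders it. A hypothetical browser-side sketch; the endpoint, frame shape, and element id are all assumptions, not the repo's actual names:

    // Assumed endpoint; the server is presumed to send a 'meta' frame
    // carrying the active model id as soon as the socket opens.
    const ws = new WebSocket('ws://localhost:8080/chat');

    ws.addEventListener('message', (event) => {
      const msg = JSON.parse(event.data as string);
      if (msg.type === 'meta' && msg.model) {
        // Hypothetical header element for showing the current model.
        document.getElementById('model-name')!.textContent = msg.model;
      }
    });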
|
ef0cfbd72d | add more code highlight languages & split out hljs | 2025-02-27 08:46:14 +00:00
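
"Split out hljs" is the standard highlight.js pattern of importing the core build and registering only the grammars actually rendered, instead of shipping the full bundle. A sketch; the particular language list here is an assumption:

    // Core build plus hand-picked grammars, rather than the full bundle.
    import hljs from 'highlight.js/lib/core';
    import javascript from 'highlight.js/lib/languages/javascript';
    import python from 'highlight.js/lib/languages/python';
    import rust from 'highlight.js/lib/languages/rust';
    import plaintext from 'highlight.js/lib/languages/plaintext';

    hljs.registerLanguage('javascript', javascript);
    hljs.registerLanguage('python', python);
    hljs.registerLanguage('rust', rust);
    hljs.registerLanguage('plaintext', plaintext);

    // Highlight a fenced block, falling back to plaintext for
    // languages we didn't register.
    export function highlight(code: string, lang: string): string {
      const language = hljs.getLanguage(lang) ? lang : 'plaintext';
      return hljs.highlight(code, { language }).value;
    }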
|
baedcbaab7 | display system prompt | 2025-02-27 07:54:53 +00:00
|
323802e740 | use user-supplied model id | 2025-02-26 16:13:31 +00:00
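
Using a user-supplied model id presumably means the client message carries an optional model field that is threaded through to the backend instead of a hardcoded name. A hedged sketch; the message shape and default are assumptions:

    // Hypothetical shape of a client message over the websocket.
    interface ChatRequest {
      conversation_id: string;
      message: string;
      model?: string; // user-supplied model id, if any
    }

    const DEFAULT_MODEL = 'default-model'; // assumed fallback name

    // Prefer the model the user asked for; fall back to the default.
    function resolveModel(req: ChatRequest): string {
      return req.model?.trim() || DEFAULT_MODEL;
    }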
|
710a6de7bc | load the system prompt on-the-fly instead of once at startup | 2025-02-26 10:41:59 +00:00
    this lets us modify it [for new conversations] on disk while the llm server is running
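
Re-reading the prompt file per conversation, rather than caching it once at startup, is what makes on-disk edits take effect without restarting the server. A minimal sketch, assuming a Node server and a system_prompt.txt path:

    import { readFile } from 'node:fs/promises';

    const SYSTEM_PROMPT_PATH = 'system_prompt.txt'; // assumed location

    // Read fresh each time a conversation starts, so editing the file
    // on disk changes the prompt for new conversations while the llm
    // server keeps running.
    async function loadSystemPrompt(): Promise<string> {
      return (await readFile(SYSTEM_PROMPT_PATH, 'utf8')).trim();
    }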
|
e45fde5178 | send and display the conversation name | 2025-02-26 10:21:15 +00:00
|
b8f7ce6ad7 | send real id if conversation_id is 'new' | 2025-02-26 08:52:43 +00:00
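
The 'new' sentinel implies the client can start chatting before a conversation exists server-side; the server mints a real id on first use and sends it back so later messages can reference it. A sketch with an assumed in-memory store:

    import { randomUUID } from 'node:crypto';

    // Assumed in-memory store of conversation histories.
    const conversations = new Map<string, string[]>();

    // Swap the 'new' placeholder for a freshly minted real id; any
    // other value is assumed to already be a real conversation id.
    function resolveConversationId(conversationId: string): string {
      if (conversationId !== 'new') return conversationId;
      const id = randomUUID();
      conversations.set(id, []);
      return id;
    }

The server would then send the real id back over the socket so the client can adopt it for the rest of the session.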
|
b0b5facced | brevity | 2025-02-26 08:48:30 +00:00
|
a2367f303e | write a little frontend | 2025-02-26 08:10:58 +00:00
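
A "little frontend" for a websocket chat app is typically a page that opens a socket, sends the user's message, and appends streamed tokens as they arrive. A browser-side sketch; the endpoint, element id, and frame shapes are assumptions:

    const ws = new WebSocket('ws://localhost:8080/chat'); // assumed endpoint
    const output = document.getElementById('output')!; // assumed element

    ws.addEventListener('open', () => {
      ws.send(JSON.stringify({ conversation_id: 'new', message: 'hello' }));
    });

    // Append each streamed token to the transcript as it arrives.
    ws.addEventListener('message', (event) => {
      const msg = JSON.parse(event.data as string);
      if (msg.token) output.textContent += msg.token;
    });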
|
2ef699600e | idk formatting | 2025-02-26 04:36:06 +00:00
    im just procrastinating because i dont think i want to start making the client
|
2e828600ea | skip sending previous context if we're continuing a connection | 2025-02-26 04:33:57 +00:00
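
Skipping the replay of prior context implies the server tracks, per connection, which conversation it has already primed the model with, and only sends the full history on the first message of a (re)opened socket. A sketch under that assumption:

    // Which conversation each live socket has already primed the model
    // with; entries vanish when the socket is garbage-collected.
    const primed = new WeakMap<object, string>();

    function messagesToSend(
      socket: object,
      conversationId: string,
      history: string[],
      newMessage: string,
    ): string[] {
      if (primed.get(socket) === conversationId) {
        // Continuing connection: the model already has the context.
        return [newMessage];
      }
      primed.set(socket, conversationId);
      return [...history, newMessage];
    }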
|
deb2f6e8e5 | get inference over websocket working | 2025-02-26 04:20:28 +00:00
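
The shape of "inference over websocket" is: accept a message frame, run generation, and stream tokens back as individual frames. A minimal sketch using the 'ws' package, with generate() standing in for whatever inference backend the repo actually calls:

    import { WebSocketServer } from 'ws';

    // Stand-in for the real inference backend: yields tokens one by one.
    async function* generate(prompt: string): AsyncGenerator<string> {
      for (const word of prompt.split(' ')) yield word + ' ';
    }

    const wss = new WebSocketServer({ port: 8080 });

    wss.on('connection', (socket) => {
      socket.on('message', async (data) => {
        const { message } = JSON.parse(data.toString());
        // Stream each token back as its own frame, then signal done.
        for await (const token of generate(message)) {
          socket.send(JSON.stringify({ token }));
        }
        socket.send(JSON.stringify({ done: true }));
      });
    });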
|
75c5c2db63 | initial commit | 2025-02-25 17:39:29 +00:00
|