llm-py-web/server

Latest commit 710a6de7bc by Charlotte Som, 2025-02-26 10:41:59 +00:00:
load the system prompt on-the-fly instead of once at startup
this lets us modify it [for new conversations] on disk while the llm server is running
__init__.py    send and display the conversation name                         2025-02-26 10:21:15 +00:00
http.py        write a little frontend                                        2025-02-26 08:10:58 +00:00
inference.py   load the system prompt on-the-fly instead of once at startup   2025-02-26 10:41:59 +00:00
tid.py         get inference over websocket working                           2025-02-26 04:20:28 +00:00
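The latest commit's technique, re-reading the system prompt from disk whenever a new conversation starts instead of caching it once at startup, might look roughly like the sketch below. The file path and function names here are hypothetical illustrations, not taken from the repository's actual code:

```python
from pathlib import Path

# Hypothetical location of the prompt file; the real path in
# llm-py-web may differ.
SYSTEM_PROMPT_PATH = Path("system_prompt.txt")


def load_system_prompt() -> str:
    # Read from disk on every call, so edits to the file take effect
    # for new conversations without restarting the llm server.
    return SYSTEM_PROMPT_PATH.read_text(encoding="utf-8")


def start_conversation(user_message: str) -> list[dict]:
    # Each new conversation picks up whatever is currently on disk.
    return [
        {"role": "system", "content": load_system_prompt()},
        {"role": "user", "content": user_message},
    ]
```

The trade-off is one small file read per conversation in exchange for being able to tweak the prompt live, which is usually negligible next to inference latency.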