Charlotte's custom fork of the Dendrite matrix homeserver

# Dendrite

Dendrite will be a Matrix homeserver written in Go.

## Install

Dendrite is still very much a work in progress, but those wishing to work on it may be interested in the installation instructions in INSTALL.md.

## Design

### Log Based Architecture

#### Decomposition and Decoupling

A Matrix homeserver can be built around append-only event logs made up of the messages, receipts, presence, typing notifications, device messages and other events sent by users on the homeserver or by other homeservers.

The server would then decompose into two categories of component: writers, which add new entries to the logs, and readers, which read those entries.

The event logs then serve to decouple the two: the writers and readers need only agree on the format of the entries in the event log. This format could be largely derived from the wire format of the events used in the client and federation protocols:


```
 C-S API   +---------+    Event Log    +---------+   C-S API
---------> |         |+  (e.g. kafka)  |         |+ --------->
           | Writers || =============> | Readers ||
---------> |         ||                |         || --------->
 S-S API   +---------+|                +---------+|   S-S API
            +---------+                 +---------+
```

However, the way Matrix handles state events in a room creates a few complications for this model.

  1. Writers require the room state at an event to check if it is allowed.
  2. Readers require the room state at an event to determine the users and servers that are allowed to see the event.
  3. A client can query the current state of the room from a reader.

The writers and readers cannot extract the necessary information directly from the event logs: the room state is built up by collecting individual state events from the event history, and recomputing it that way on every request would take too long.

The writers and readers therefore need access to something that stores copies of the room state in a form that can be queried efficiently. One possibility would be for the readers and writers to maintain copies of the current state in local databases. A second would be to add a dedicated component that maintains the state of the room and exposes an API that the readers and writers can query. The second has the advantage that the state is calculated and stored in a single location.

```
 C-S API   +---------+    Log   +--------+   Log   +---------+   C-S API
---------> |         |+ ======> |        | ======> |         |+ --------->
           | Writers ||         |  Room  |         | Readers ||
---------> |         || <------ | Server | ------> |         || --------->
 S-S API   +---------+|  Query  |        |  Query  +---------+|  S-S API
            +---------+         +--------+          +---------+
```

The room server can annotate the events it logs to the readers with room state so that the readers can avoid querying the room server unnecessarily.

This architecture can be extended to cover most of the APIs.
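As a sketch of that dedicated-component option, the room server's query API might look something like the interface below. The names (`RoomStateQuerier`, `LatestEvents`, `StateAtEvent`) are hypothetical, not Dendrite's actual internal API; the toy in-memory implementation just shows how writers and readers would consume it.

```go
package main

import "fmt"

// RoomStateQuerier sketches the query API a room server might expose.
// Method names are illustrative assumptions, not Dendrite's actual API.
type RoomStateQuerier interface {
	// LatestEvents returns the most recent event IDs in the room, which
	// a writer needs in order to build a new event on top of them.
	LatestEvents(roomID string) []string
	// StateAtEvent returns the IDs of the state events in force at the
	// given event, needed for auth and visibility checks.
	StateAtEvent(roomID, eventID string) []string
}

// memoryRoomServer is a toy in-memory implementation for illustration.
type memoryRoomServer struct {
	latest map[string][]string            // roomID -> latest event IDs
	state  map[string]map[string][]string // roomID -> eventID -> state event IDs
}

func (s *memoryRoomServer) LatestEvents(roomID string) []string {
	return s.latest[roomID]
}

func (s *memoryRoomServer) StateAtEvent(roomID, eventID string) []string {
	return s.state[roomID][eventID]
}

func main() {
	var rs RoomStateQuerier = &memoryRoomServer{
		latest: map[string][]string{"!r:x": {"$e2:x"}},
		state: map[string]map[string][]string{
			"!r:x": {"$e2:x": {"$create:x", "$join:x"}},
		},
	}
	fmt.Println(rs.LatestEvents("!r:x"))
	fmt.Println(rs.StateAtEvent("!r:x", "$e2:x"))
}
```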

### How things are supposed to work

#### Local client sends an event in an existing room

1. The client sends a `PUT /_matrix/client/r0/rooms/{roomId}/send` request and an HTTP load balancer routes the request to a ClientAPI.

2. The ClientAPI:

   * Authenticates the local user using the `access_token` sent in the HTTP request.
   * Checks if it has already processed or is processing a request with the same `txnID`.
   * Calculates which state events are needed to auth the request.
   * Queries the necessary state events and the latest events in the room from the RoomServer.
   * Confirms that the room exists and checks whether the event is allowed by the auth checks.
   * Builds and signs the event.
   * Writes the event to an "InputRoomEvent" kafka topic.
   * Sends a `200 OK` response to the client.

3. The RoomServer reads the event from the "InputRoomEvent" kafka topic and:

   * Checks if it already has a copy of the event.
   * Checks if the event is allowed by the auth checks using the auth events at the event.
   * Calculates the room state at the event.
   * Works out what the latest events in the room are after processing this event.
   * Calculates how the changes in the latest events affect the current state of the room.
   * TODO: Work out which events determine the visibility of this event to other users.
   * Writes the event along with the changes in current state to an "OutputRoomEvent" kafka topic. It writes all the events for a room to the same kafka partition.
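Writing all of a room's events to the same kafka partition preserves per-room ordering for consumers. A minimal sketch of how a producer might pick a partition deterministically from the room ID (a real Kafka client's hash partitioner keyed on the room ID does the equivalent):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// partitionForRoom picks a Kafka partition by hashing the room ID, so
// every event in a room lands on the same partition and is consumed in
// order. Illustrative sketch, not Dendrite's actual producer code.
func partitionForRoom(roomID string, numPartitions uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(roomID))
	return h.Sum32() % numPartitions
}

func main() {
	p1 := partitionForRoom("!room:example.org", 8)
	p2 := partitionForRoom("!room:example.org", 8)
	fmt.Println(p1 == p2) // the same room always maps to the same partition
}
```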

3a) The ClientSync reads the event from the "OutputRoomEvent" kafka topic:

* Updates its copy of the current state for the room.
* Works out which users need to be notified about the event.
* Wakes up any pending `/_matrix/client/r0/sync` requests for those users.
* Adds the event to the recent timeline events for the room.
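Waking up pending `/sync` requests can be sketched with a channel per waiting user that is closed when a relevant event arrives. This `Notifier` is illustrative only, not the ClientSync's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// Notifier sketches how a sync server might wake pending /sync requests:
// each waiting request listens on a channel, and new events close it.
type Notifier struct {
	mu      sync.Mutex
	waiting map[string]chan struct{} // userID -> wake-up channel
}

func NewNotifier() *Notifier {
	return &Notifier{waiting: make(map[string]chan struct{})}
}

// Wait returns a channel that is closed when the user has new events.
func (n *Notifier) Wait(userID string) <-chan struct{} {
	n.mu.Lock()
	defer n.mu.Unlock()
	ch, ok := n.waiting[userID]
	if !ok {
		ch = make(chan struct{})
		n.waiting[userID] = ch
	}
	return ch
}

// Wake closes the channels of all users who should see the new event.
func (n *Notifier) Wake(userIDs []string) {
	n.mu.Lock()
	defer n.mu.Unlock()
	for _, u := range userIDs {
		if ch, ok := n.waiting[u]; ok {
			close(ch)
			delete(n.waiting, u)
		}
	}
}

func main() {
	n := NewNotifier()
	ch := n.Wait("@alice:example.org")
	n.Wake([]string{"@alice:example.org"})
	<-ch // returns immediately because the channel was closed
	fmt.Println("alice woken")
}
```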

3b) The FederationSender reads the event from the "OutputRoomEvent" kafka topic:

* Updates its copy of the current state for the room.
* Works out which remote servers need to be notified about the event.
* Sends a `/_matrix/federation/v1/send` request to those servers.
* Or, if there is a request in progress, adds the event to a queue to be
  sent when the previous request finishes.
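The send-or-queue behaviour in the last two bullets can be sketched as a tiny per-destination queue. The `destinationQueue` type and its methods are hypothetical, for illustration only:

```go
package main

import "fmt"

// destinationQueue sketches the rule above: if a request to a remote
// server is already in flight, new events queue up and are sent together
// when it finishes.
type destinationQueue struct {
	sending bool
	pending []string // event IDs waiting for the current request to finish
}

// Add either starts sending an event immediately or queues it.
func (q *destinationQueue) Add(eventID string, send func([]string)) {
	if q.sending {
		q.pending = append(q.pending, eventID)
		return
	}
	q.sending = true
	send([]string{eventID})
}

// RequestFinished flushes whatever queued up while the request ran.
func (q *destinationQueue) RequestFinished(send func([]string)) {
	if len(q.pending) == 0 {
		q.sending = false
		return
	}
	batch := q.pending
	q.pending = nil
	send(batch)
}

func main() {
	var sent [][]string
	send := func(evs []string) { sent = append(sent, evs) }
	q := &destinationQueue{}
	q.Add("$e1", send)      // sent immediately
	q.Add("$e2", send)      // queued: a request is in flight
	q.Add("$e3", send)      // queued too
	q.RequestFinished(send) // $e2 and $e3 go out together
	fmt.Println(sent)       // [[$e1] [$e2 $e3]]
}
```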

#### Remote server sends an event in an existing room

1. The remote server sends a `PUT /_matrix/federation/v1/send` request and an HTTP load balancer routes the request to a FederationReceiver.

2. The FederationReceiver:

   * Authenticates the remote server using the "X-Matrix" authorisation header.
   * Checks if it has already processed or is processing a request with the same `txnID`.
   * Checks the signatures for the events, fetching the ed25519 keys for the event senders if necessary.
   * Queries the RoomServer for a copy of the state of the room at each event.
   * If the RoomServer doesn't know the state of the room at an event, queries the state of the room at the event from the remote server using `GET /_matrix/federation/v1/state_ids`, falling back to `GET /_matrix/federation/v1/state` if necessary.
   * Once the state at each event is known, checks whether the events are allowed by the auth checks against the state at each event.
   * For each event that is allowed, writes the event to the "InputRoomEvent" kafka topic.
   * Sends a `200 OK` response to the remote server listing which events were successfully processed and which events failed.

3. The RoomServer processes the event the same as it would a local event.

3a) The ClientSync processes the event the same as it would a local event.
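The FederationReceiver step above authenticates the remote server using the "X-Matrix" authorisation header. A simplified sketch of pulling its fields apart follows; `parseXMatrix` is a hypothetical helper, not Dendrite's code, and a real implementation must also verify the signature against the origin server's ed25519 signing key:

```go
package main

import (
	"fmt"
	"strings"
)

// parseXMatrix extracts the key=value fields from a federation "X-Matrix"
// Authorization header. Simplified for illustration: it does not handle
// quoted commas and performs no signature verification.
func parseXMatrix(header string) map[string]string {
	fields := make(map[string]string)
	rest := strings.TrimPrefix(header, "X-Matrix ")
	for _, part := range strings.Split(rest, ",") {
		kv := strings.SplitN(part, "=", 2)
		if len(kv) != 2 {
			continue
		}
		fields[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return fields
}

func main() {
	h := `X-Matrix origin=remote.example.org,key="ed25519:key1",sig="dGVzdA"`
	f := parseXMatrix(h)
	fmt.Println(f["origin"], f["key"])
}
```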

## TODO

There's plenty still to do to make Dendrite usable! We're tracking progress in a spreadsheet.