* Add tests for device list updates
* Add stale_device_lists table and use db before asking remote for device keys
* Fetch remote keys if all devices are requested
* Add display_name col to store remote device names
A few other tweaks to make `Server correctly handles incoming m.device_list_update`
pass.
* Fix sqlite otk bug
* Use an unbuffered channel to block /send, so sytest no longer races
* Linting and fix a bug whereby we didn't send updated device list tokens to the client, sometimes causing a tight loop on /sync
* No longer assert staleness as Update blocks on workers now
* Back out tweaks
* Bugfixes
* Add InputDeviceListUpdate
* Unbreak unit tests
* Process inbound device list updates from federation
- Persist the keys in the keyserver and produce key changes
- Does not currently fetch keys from the remote server if the prev IDs are missing
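As a rough sketch of the flow described above (the type and function names here are illustrative, not the real keyserver code), handling an incoming `m.device_list_update` means checking whether we already know the prev IDs before persisting the keys and producing a key change:

```go
package main

import "fmt"

// deviceListUpdate is a trimmed-down stand-in for the m.device_list_update EDU;
// only the fields needed for this sketch are shown.
type deviceListUpdate struct {
	UserID   string
	DeviceID string
	StreamID int64
	PrevIDs  []int64
}

// processUpdate sketches the flow: if we already have all the prev IDs we
// persist the keys and emit a key change; otherwise the user's device list
// would need to be re-fetched from the remote server (not yet implemented
// at this point in the changelog).
func processUpdate(update deviceListUpdate, havePrevID func(int64) bool) string {
	for _, prev := range update.PrevIDs {
		if !havePrevID(prev) {
			return "stale: missing prev ID, need to resync from remote"
		}
	}
	// Persist keys in the keyserver DB and produce a key change event here.
	return fmt.Sprintf("stored keys for %s/%s at stream %d", update.UserID, update.DeviceID, update.StreamID)
}

func main() {
	u := deviceListUpdate{UserID: "@bob:remote", DeviceID: "ABC", StreamID: 5, PrevIDs: []int64{4}}
	fmt.Println(processUpdate(u, func(id int64) bool { return id == 4 }))
}
```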
* Linting
* Add QueryDeviceMessages to serve up device keys and stream IDs
* Consume key change events in fedsender
Don't send them to destinations yet, as we haven't worked out which servers need them.
* Send device list updates to all required servers
* Glue it all together
* Fix `New users appear in /keys/changes`
* Create blank device keys when logging in on a new device
* Add PerformDeviceUpdate and fix a few bugs
- Correct device deletion query on sqlite
- Return no keys on /keys/query rather than an empty key
* Unbreak sqlite properly
* Use a real DB for currentstateserver integration tests
* Race fix
* Recheck device lists when join/leave events come in
* Add PerformDeviceDeletion
* Notify clients when devices are deleted
* Unbreak things
* Remove debug logging
* Add support for logs in StreamingToken
Tokens now end up looking like `s11_22|dl-0-123|ab-0-12224`
where `dl` and `ab` are log names, `0` is the partition and
`123` and `12224` are the offsets.
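A minimal sketch of the token shape described here, using illustrative names rather than the real syncapi `types` package:

```go
package main

import "fmt"

// logOffset is an illustrative (name, partition, offset) triple; the real
// StreamingToken layout in the syncapi types package may differ.
type logOffset struct {
	Name      string
	Partition int32
	Offset    int64
}

// streamingToken sketches a sync token carrying PDU/EDU positions plus
// per-log offsets, e.g. "s11_22|dl-0-123|ab-0-12224".
type streamingToken struct {
	PDUPosition int64
	EDUPosition int64
	Logs        []logOffset
}

func (t streamingToken) String() string {
	s := fmt.Sprintf("s%d_%d", t.PDUPosition, t.EDUPosition)
	for _, l := range t.Logs {
		s += fmt.Sprintf("|%s-%d-%d", l.Name, l.Partition, l.Offset)
	}
	return s
}

func main() {
	tok := streamingToken{
		PDUPosition: 11,
		EDUPosition: 22,
		Logs: []logOffset{
			{Name: "dl", Partition: 0, Offset: 123},
			{Name: "ab", Partition: 0, Offset: 12224},
		},
	}
	fmt.Println(tok) // s11_22|dl-0-123|ab-0-12224
}
```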
* Also test reserialisation
* s/|/./g so tokens URL-escape nicely
* User directory
* Fix syncapi unit test
* Make user directory only show remote users you know about from your joined rooms
* Update sytest-whitelist
* Review comments
* Produce kafka events when keys are added
* Consume key changes in syncapi with TODO markers for handling them and catching up
* unbreak tests
* Linting
* Use TransactionWriter on other component SQLites
* Fix sync API tests
* Fix panic in media API
* Fix a couple of transactions
* Fix wrong query, add some logging output
* Add debug logging into StoreEvent
* Adjust InsertRoomNID
* Update logging
* Deduplicate FS database, add some EDU persistence groundwork
* Extend TransactionWriter to use optional existing transaction, use that for FS SQLite database writes
* Fix build due to bad keyserver import
* Working EDU persistence
* gocyclo, unsurprisingly
* Remove unused
* Update copyright notices
* Add a bit more logging to the fedsender
* bugfix: continue sending PDUs if new ones are added whilst sending another PDU
Without this, the queue goes back to sleep on `<-oq.notifyPDUs`, which won't
fire because `pendingPDUs` is already > 0. This should fix a flaky sytest.
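A sketch of the wake-up pattern this describes; the names `notifyPDUs` and `pendingPDUs` mirror the text above, but the real federation sender queue is considerably more involved. The key point is to keep draining the queue after each send rather than blocking again on the notify channel:

```go
package main

import (
	"sync"
	"time"
)

// outgoingQueue is a minimal sketch of a per-destination queue.
type outgoingQueue struct {
	mu          sync.Mutex
	pendingPDUs []string      // queued events (placeholder type)
	notifyPDUs  chan struct{} // capacity 1: a single pending wake-up signal
}

func (oq *outgoingQueue) send(pdu string) {
	oq.mu.Lock()
	oq.pendingPDUs = append(oq.pendingPDUs, pdu)
	oq.mu.Unlock()
	// Non-blocking notify: if a wake-up is already pending we don't need another.
	select {
	case oq.notifyPDUs <- struct{}{}:
	default:
	}
}

func (oq *outgoingQueue) worker(deliver func(string)) {
	for range oq.notifyPDUs {
		// Keep draining until empty, so PDUs added while we were busy sending
		// are not stranded waiting for a notify that never comes.
		for {
			oq.mu.Lock()
			if len(oq.pendingPDUs) == 0 {
				oq.mu.Unlock()
				break
			}
			next := oq.pendingPDUs[0]
			oq.pendingPDUs = oq.pendingPDUs[1:]
			oq.mu.Unlock()
			deliver(next)
		}
	}
}

func main() {
	oq := &outgoingQueue{notifyPDUs: make(chan struct{}, 1)}
	go oq.worker(func(pdu string) { time.Sleep(10 * time.Millisecond) })
	oq.send("$event1")
	oq.send("$event2") // arrives while $event1 is being sent, still delivered
	time.Sleep(100 * time.Millisecond)
}
```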
* Break if no txn is sent
* WIP syncapi work
* More debugging
* Bump GMSL version to pull in working Event.Redact
* Remove logging
* Make redactions work on v3+
* Fix more tests
* Move filter table to syncapi where it is used
* Implement /sync `limited` and read timeline limit from stored filters
We now fully handle `room.timeline.limit` filters (inline + stored) and
return the right `limited` value in sync responses.
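A rough sketch of applying a `room.timeline.limit` filter to a window of recent events and computing `limited`; the function and constant names are made up for illustration:

```go
package main

import "fmt"

const defaultTimelineLimit = 20

// applyTimelineLimit keeps the most recent `limit` events and reports
// limited=true if anything was cut off, so clients know to back-paginate.
func applyTimelineLimit(events []string, limit int) (timeline []string, limited bool) {
	if limit <= 0 {
		limit = defaultTimelineLimit
	}
	if len(events) <= limit {
		return events, false
	}
	// Newest events are assumed to be at the end of the slice.
	return events[len(events)-limit:], true
}

func main() {
	events := []string{"$a", "$b", "$c", "$d", "$e"}
	timeline, limited := applyTimelineLimit(events, 3)
	fmt.Println(timeline, limited) // [$c $d $e] true
}
```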
* Update whitelist
* Default to the default timeline limit if it's unset, also strip the extra event correctly
* Update whitelist
* Join room body is optional
* Support deprecated login by user/password
* Implement dummy key upload endpoint
* Make a very determinate end to /messages if we hit the create event in back-pagination
* Linting
* Make userapi responsible for checking access tokens
There are still plenty of dependencies on account/device DBs, but this
is a start. This is a breaking change as it adds a required config
value `listen.user_api`.
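A sketch of the general shape this takes: the HTTP-facing component delegates token validation to the user API instead of reading the device DB itself. The interface and method names here are hypothetical and may not match Dendrite's actual internal API:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// userAPI is a hypothetical slice of the internal user API that the client
// and federation APIs call to validate access tokens.
type userAPI interface {
	QueryAccessToken(ctx context.Context, token string) (userID, deviceID string, err error)
}

// verifyAccessToken delegates to the user API rather than touching the
// account/device databases directly.
func verifyAccessToken(ctx context.Context, api userAPI, token string) (string, string, error) {
	if token == "" {
		return "", "", errors.New("missing access token")
	}
	return api.QueryAccessToken(ctx, token)
}

type fakeUserAPI struct{}

func (fakeUserAPI) QueryAccessToken(_ context.Context, token string) (string, string, error) {
	if token == "syt_secret" {
		return "@alice:example.org", "DEVICE1", nil
	}
	return "", "", errors.New("unknown token")
}

func main() {
	userID, deviceID, err := verifyAccessToken(context.Background(), fakeUserAPI{}, "syt_secret")
	fmt.Println(userID, deviceID, err)
}
```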
* Cleanup
* Review comments and test fix
This is a wrapper around whatever impl we have, which then logs the
function name/request/response/error.
Also tweak when we log on kafka streams: only log on the producer
side, not the consumer side. We've never had issues with comms, and
having one message rather than N is nicer.
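The wrapping pattern looks roughly like the sketch below; the one-method `roomserverAPI` interface is invented purely to keep the example short, whereas the real internal APIs have many methods:

```go
package main

import (
	"context"
	"fmt"
	"log"
)

// roomserverAPI is a hypothetical one-method internal API.
type roomserverAPI interface {
	PerformBackfill(ctx context.Context, roomID string, limit int) ([]string, error)
}

// loggingAPI wraps whatever implementation we have and logs the function
// name, request, response and error for each call.
type loggingAPI struct {
	inner roomserverAPI
}

func (l loggingAPI) PerformBackfill(ctx context.Context, roomID string, limit int) ([]string, error) {
	events, err := l.inner.PerformBackfill(ctx, roomID, limit)
	log.Printf("PerformBackfill req={room:%s limit:%d} res=%d events err=%v", roomID, limit, len(events), err)
	return events, err
}

type realAPI struct{}

func (realAPI) PerformBackfill(_ context.Context, roomID string, limit int) ([]string, error) {
	return []string{"$event1", "$event2"}, nil
}

func main() {
	var api roomserverAPI = loggingAPI{inner: realAPI{}}
	events, _ := api.PerformBackfill(context.Background(), "!room:example.org", 10)
	fmt.Println(events)
}
```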
* s/QueryBackfill/PerformBackfill/g
* OutputEvent now includes AddStateEvents, which contains the full events for any extra state events
* Only include adds not the current event
* Get adding state right
Previously we had 3 monoliths:
- dendrite-monolith-server
- dendrite-demo-libp2p
- dendritejs
which all had their own way of setting up public routes. Factor this
out into a new `setup.Monolith` struct which gets all dependencies
set as fields. This is different from `basecomponent.Base`, which
doesn't provide any way to set configured deps (e.g. the public rooms db).
Part of a larger process to clean up how we initialise Dendrite.
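A minimal sketch of what a dependency-carrying struct like `setup.Monolith` looks like; the field set and route shown here are illustrative only (the real code uses gorilla/mux routers and wires up every component):

```go
package main

import "net/http"

// Monolith carries configured dependencies as fields; each binary fills in
// its own implementations, then asks the struct to add all public routes.
type Monolith struct {
	ClientAPIMux     *http.ServeMux
	FederationAPIMux *http.ServeMux
	// ... databases, internal APIs and other configured deps as fields ...
}

// AddAllPublicRoutes wires every component onto the shared muxes, so the
// three monolith binaries no longer duplicate this setup.
func (m *Monolith) AddAllPublicRoutes() {
	m.ClientAPIMux.HandleFunc("/_matrix/client/versions", func(w http.ResponseWriter, _ *http.Request) {
		w.Write([]byte(`{"versions":["r0.6.0"]}`))
	})
	// ... client API, federation API, media API, sync API routes ...
}

func main() {
	m := &Monolith{
		ClientAPIMux:     http.NewServeMux(),
		FederationAPIMux: http.NewServeMux(),
	}
	m.AddAllPublicRoutes()
	// In the real binaries an HTTP server would now serve these muxes.
}
```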
* Split out adding HTTP routes from making internal APIs for clarity
* Split out more components
* Split out more things
* Finish converting
* internal mux for internal routes
* Groundwork for send-to-device messaging
* Update sample config
* Add unstable routing for now
* Send to device consumer in sync API
* Start the send-to-device consumer
* fix indentation in dendrite-config.yaml
* Create send-to-device database tables, other tweaks
* Add some logic for send-to-device messages, add them into sync stream
* Handle incoming send-to-device messages, count them with EDU stream pos
* Undo changes to test
* pq.Array
* Fix sync
* Logging
* Fix a couple of transaction things, fix client API
* Add send-to-device test, hopefully fix bugs
* Comments
* Refactor a bit
* Fix schema
* Fix queries
* Debug logging
* Fix storing and retrieving of send-to-device messages
* Try to avoid database locks
* Update sync position
* Use latest sync position
* Jiggle about sync a bit
* Fix tests
* Break out the retrieval from the update/delete behaviour
* Comments
* nolint on getResponseWithPDUsForCompleteSync
* Try to line up sync tokens again
* Implement wildcard
* Add all send-to-device tests to whitelist, what could possibly go wrong?
* Only care about wildcard when targeted locally
* Deduplicate transactions
* Handle tokens properly, return immediately if there are send-to-device messages waiting
* Fix sync
* Update sytest-whitelist
* Fix copyright notice (need to do more of this)
* Comments, copyrights
* Return errors from Do, fix dendritejs
* Review comments
* Comments
* Constructor for TransactionWriter
* deletions
* Update gomatrixserverlib, sytest-blacklist
* Per-user-per-device sync streams
* Tweaks
* Tweaks
* Pass full device into CompleteSync
* Set user IDs and device IDs properly in tests
* Add new test, fix TestNewEventAndWasPreviouslyJoinedToRoom
* nolint a function that is not used yet
* Add test for waking up single device
* Hopefully unstick test
* Try to ensure that TestCorrectStreamWakeup doesn't block forever
* Update tests
* Separate muxes for public and internal APIs
* Update client-api-proxy and federation-api-proxy so they don't add /api to the path
* Tidy up
* Consistent HTTP setup
* Set up prefixes properly
* sytest: Make 'Inbound federation can backfill events' pass
This breaks 'Outbound federation can backfill events' because now
we are returning the right number of events, which the previous
test was relying on.
Previously, /messages was backfilling the membership event, causing
the test to pass. Now we are no longer backfilling the membership
event due to the change in this commit, causing the test to fail.
The test should instead be returning the membership event locally
from the syncapi's database, but it doesn't do so fast enough, resulting
in a no-op /sync response with a next_batch=s0_0 which will never
pick up the local membership event when it rolls in. The test
does attempt to retry, but doesn't use the new next_batch=s1_0,
resulting in the event missing from the /messages response.
* Linting
* Refactor all postgres tables; start work on sqlite
* wip sqlite merges; database is locked errors to investigate and failing tests
* Revert "wip sqlite merges; database is locked errors to investigate and failing tests"
This reverts commit 26cbfc5b75ae2dc4fb31a838b917aa39d758f162.
* convert current room state table
* port over sqlite topology table
* remove a few functions
* remove more functions
* Share more code
* factor out completesync and a bit more
* Remove remaining code
* Initial syncapi storage refactor to share pq/sqlite code
This goes down a different route than https://github.com/matrix-org/dendrite/pull/985
which tried to reduce even the boilerplate of `ExecContext` etc. The previous pattern
fails badly when there are subtle differences in parameters and hence the shared
boilerplate to read from `QueryContext` breaks. Rather than attacking it at that level,
the main place where we want to reuse code is for the `syncserver.go` itself - the
database implementation which has lots of complex logic. So instead, this commit:
- Makes `invites_table.go` an interface.
- Makes `SyncServerDatasource` use that interface
- This means some functions are now identical for pq/sqlite, so factor them out
to a temporary `shared.Database` struct which will grow until it replaces all of
`SyncServerDatasource`.
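A sketch of the pattern, under the assumption that the table names and method signatures shown are simplified stand-ins for the real ones: the per-engine tables become interfaces, and a shared `Database` struct holds the engine-agnostic logic:

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
)

// invitesTable is the kind of interface invites_table.go becomes: postgres
// and sqlite each provide an implementation with their own SQL.
type invitesTable interface {
	InsertInvite(ctx context.Context, txn *sql.Tx, roomID, userID string) error
}

// Database holds the table interfaces and the complex, engine-agnostic logic
// that used to be duplicated in each SyncServerDatasource.
type Database struct {
	Invites invitesTable
}

// AddInvite is shared logic: both engines reuse it unchanged.
func (d *Database) AddInvite(ctx context.Context, roomID, userID string) error {
	return d.Invites.InsertInvite(ctx, nil, roomID, userID)
}

type fakeInvitesTable struct{}

func (fakeInvitesTable) InsertInvite(_ context.Context, _ *sql.Tx, roomID, userID string) error {
	fmt.Println("insert invite", roomID, userID)
	return nil
}

func main() {
	db := &Database{Invites: fakeInvitesTable{}}
	_ = db.AddInvite(context.Background(), "!room:example.org", "@alice:example.org")
}
```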
* Missing files
* syncapi: Rename and split out tokens
Previously we used the badly named `PaginationToken` which was
used for both `/sync` and `/messages` requests. This quickly
became confusing because named fields like `PDUPosition` meant
different things depending on the token type. Instead, we now have
two token types: `TopologyToken` and `StreamingToken`, both of
which have fields which make more sense for their specific situations.
Updated the codebase to use one or the other. `PaginationToken` still
lives on as `syncToken`, an unexported type which both tokens rely on.
This allows us to guarantee that the specific mappings of positions
to a string remain solely under the control of the `types` package.
This enables us to move high-level conceptual things like
"decrement this topological token" to function calls, e.g.
`TopologyToken.Decrement()`.
Currently broken because `/messages` seemingly used both stream and
topological tokens, though I need to confirm this.
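The structure is roughly the sketch below; the position fields and string format are simplified compared to the real `types` package, but it shows the unexported shared type and the two public wrappers:

```go
package main

import "fmt"

// syncToken is the unexported representation both public token types wrap.
type syncToken struct {
	positions []int64
}

// StreamingToken is keyed by stream offsets and is used for /sync.
type StreamingToken struct{ syncToken }

// TopologyToken is keyed by (depth, stream) and is used for /messages.
type TopologyToken struct{ syncToken }

// Decrement moves a topology token one step back in topological order,
// keeping the token-to-string mapping inside this package.
func (t *TopologyToken) Decrement() {
	if t.positions[0] > 0 {
		t.positions[0]--
	}
}

func (t TopologyToken) String() string {
	return fmt.Sprintf("t%d_%d", t.positions[0], t.positions[1])
}

func main() {
	tok := TopologyToken{syncToken{positions: []int64{5, 12}}}
	tok.Decrement()
	fmt.Println(tok) // t4_12
}
```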
* final tweaks/hacks
* spurious logging
* Review comments and linting
* Fix ordering when backfilling
The problem was that we weren't sorting the returned events
by depth when sending them back to the caller; instead we
were sorting by prev_events, which is not the same thing.
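The fix amounts to ordering by depth before returning; a self-contained sketch with a stand-in event type (only the depth matters here):

```go
package main

import (
	"fmt"
	"sort"
)

// backfilledEvent stands in for a full matrix event.
type backfilledEvent struct {
	EventID string
	Depth   int64
}

// sortByDepth orders backfilled events by depth before returning them to the
// caller; prev_events order is not a substitute for depth order.
func sortByDepth(events []backfilledEvent) {
	sort.Slice(events, func(i, j int) bool {
		return events[i].Depth < events[j].Depth
	})
}

func main() {
	events := []backfilledEvent{{"$c", 3}, {"$a", 1}, {"$b", 2}}
	sortByDepth(events)
	fmt.Println(events) // [{$a 1} {$b 2} {$c 3}]
}
```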
* Fixup tests