* Move filter table to syncapi where it is used
* Implement /sync `limited` and read timeline limit from stored filters
We now fully handle `room.timeline.limit` filters (inline + stored) and
return the right value for the `limited` flag in sync responses.
* Update whitelist
* Default to the default timeline limit if it's unset, and strip the extra event correctly
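A rough sketch of the limit-plus-one approach implied here (names are
illustrative, not the actual syncapi code): ask the database for one more
event than the filter allows, then trim and flag.
```
// Hypothetical illustration only - not the actual syncapi code.
package timeline

const defaultTimelineLimit = 20

// applyTimelineLimit assumes the database was asked for filterLimit+1 recent
// events. It trims the slice to the limit and reports whether older events
// were dropped, which is what the `limited` flag conveys.
func applyTimelineLimit(recentEventJSONs [][]byte, filterLimit int) ([][]byte, bool) {
	limit := filterLimit
	if limit <= 0 {
		limit = defaultTimelineLimit // the filter left the limit unset
	}
	if len(recentEventJSONs) <= limit {
		return recentEventJSONs, false
	}
	// Strip the extra (oldest) event and mark the timeline as limited.
	return recentEventJSONs[len(recentEventJSONs)-limit:], true
}
```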
* Update whitelist
* Groundwork for send-to-device messaging
* Update sample config
* Add unstable routing for now
* Send to device consumer in sync API
* Start the send-to-device consumer
* fix indentation in dendrite-config.yaml
* Create send-to-device database tables, other tweaks
* Add some logic for send-to-device messages, add them into sync stream
* Handle incoming send-to-device messages, count them with EDU stream pos
* Undo changes to test
* pq.Array
* Fix sync
* Logging
* Fix a couple of transaction things, fix client API
* Add send-to-device test, hopefully fix bugs
* Comments
* Refactor a bit
* Fix schema
* Fix queries
* Debug logging
* Fix storing and retrieving of send-to-device messages
* Try to avoid database locks
* Update sync position
* Use latest sync position
* Jiggle about sync a bit
* Fix tests
* Break out the retrieval from the update/delete behaviour
* Comments
* nolint on getResponseWithPDUsForCompleteSync
* Try to line up sync tokens again
* Implement wildcard
* Add all send-to-device tests to whitelist, what could possibly go wrong?
* Only care about wildcard when targeted locally
* Deduplicate transactions
* Handle tokens properly, return immediately if send-to-device messages are waiting
* Fix sync
* Update sytest-whitelist
* Fix copyright notice (need to do more of this)
* Comments, copyrights
* Return errors from Do, fix dendritejs
* Review comments
* Comments
* Constructor for TransactionWriter
* Deletions
* Update gomatrixserverlib, sytest-blacklist
* Refactor all postgres tables; start work on sqlite
* wip sqlite merges; database is locked errors to investigate and failing tests
* Revert "wip sqlite merges; database is locked errors to investigate and failing tests"
This reverts commit 26cbfc5b75ae2dc4fb31a838b917aa39d758f162.
* convert current room state table
* port over sqlite topology table
* remove a few functions
* remove more functions
* Share more code
* factor out completesync and a bit more
* Remove remaining code
* Initial syncapi storage refactor to share pq/sqlite code
This goes down a different route than https://github.com/matrix-org/dendrite/pull/985
which tried to reduce even the boilerplate of `ExecContext` etc. That pattern
fails badly when there are subtle differences in parameters, because the shared
boilerplate for reading from `QueryContext` breaks. Rather than attacking it at
that level, the main place where we want to reuse code is `syncserver.go` itself -
the database implementation, which has lots of complex logic. So instead, this commit:
- Makes `invites_table.go` an interface.
- Makes `SyncServerDatasource` use that interface.
- This means some functions are now identical for pq/sqlite, so they are factored
  out into a temporary `shared.Database` struct which will grow until it replaces
  all of `SyncServerDatasource`.
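A minimal sketch of the pattern just described, with hypothetical method
names (the real interface is richer):
```
// Illustrative only - method names and signatures are assumptions.
package shared

import "context"

// InvitesTable hides the engine-specific SQL behind an interface; postgres
// and sqlite each provide their own implementation.
type InvitesTable interface {
	InsertInviteEvent(ctx context.Context, roomID, userID string, eventJSON []byte) (streamPos int64, err error)
	SelectInviteEventsInRange(ctx context.Context, targetUserID string, from, to int64) (map[string][]byte, error)
}

// Database holds the engine-agnostic logic and grows until it can replace
// SyncServerDatasource entirely.
type Database struct {
	Invites InvitesTable
}

// InviteEventsInRange is identical for pq/sqlite, so it lives here once.
func (d *Database) InviteEventsInRange(ctx context.Context, userID string, from, to int64) (map[string][]byte, error) {
	return d.Invites.SelectInviteEventsInRange(ctx, userID, from, to)
}
```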
* Missing files
* syncapi: Rename and split out tokens
Previously we used the badly named `PaginationToken` which was
used for both `/sync` and `/messages` requests. This quickly
became confusing because named fields like `PDUPosition` meant
different things depending on the token type. Instead, we now have
two token types: `TopologyToken` and `StreamingToken`, both of
which have fields which make more sense for their specific situations.
Updated the codebase to use one or the other. `PaginationToken` still
lives on as `syncToken`, an unexported type which both tokens rely on.
This allows us to guarantee that the specific mappings of positions
to a string remain solely under the control of the `types` package.
This enables us to move high-level conceptual things like
"decrement this topological token" to function calls, e.g.
`TopologyToken.Decrement()`.
Currently broken because `/messages` seemingly used both stream and
topological tokens, though I need to confirm this.
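A sketch of the shape this gives the `types` package; the field names and
serialised form below are assumptions, not the exact implementation:
```
// Illustrative only - the real types package differs in detail.
package types

import "fmt"

// syncToken stays unexported so the string mapping of positions is owned
// entirely by this package.
type syncToken struct {
	prefix    string
	positions []int64
}

func (t syncToken) String() string {
	return fmt.Sprintf("%s%d_%d", t.prefix, t.positions[0], t.positions[1])
}

// StreamingToken is what /sync hands out.
type StreamingToken struct {
	PDUPosition int64
	EDUPosition int64
}

func (t StreamingToken) String() string {
	return syncToken{prefix: "s", positions: []int64{t.PDUPosition, t.EDUPosition}}.String()
}

// TopologyToken is what /messages uses, ordered by room topology (depth).
type TopologyToken struct {
	Depth     int64
	StreamPos int64
}

// Decrement steps the token back one depth, flooring at 1.
func (t *TopologyToken) Decrement() {
	if t.Depth > 1 {
		t.Depth--
	}
}
```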
* final tweaks/hacks
* spurious logging
* Review comments and linting
* Fix ordering when backfilling
The problem was that we weren't sorting the returned events by depth when
sending them back to the caller; instead we were sorting by prev_events,
which is not the same thing.
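The fix boils down to something like the following, assuming
gomatrixserverlib's `Depth()` accessor:
```
// Sketch: sort backfilled events oldest-first by depth before returning them.
package backfill

import (
	"sort"

	"github.com/matrix-org/gomatrixserverlib"
)

func sortByDepth(events []gomatrixserverlib.Event) {
	sort.Slice(events, func(i, j int) bool {
		return events[i].Depth() < events[j].Depth()
	})
}
```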
* Fixup tests
* Limit database connections (#564)
- Add new options to the config file database:
max_open_conns: 100
max_idle_conns: 2
conn_max_lifetime: -1
- Implement connection parameter setup on the *DB (database/sql) in internal/sqlutil/trace.go:Open()
- Propagate the values in the form of the DbProperties interface via all the
  Open() and NewDatabase() functions
Signed-off-by: Tomas Jirka <tomas.jirka@email.cz>
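A sketch of how those options map onto database/sql (the DbProperties
method names below are assumptions):
```
// Illustrative wiring only - the real sqlutil code may differ.
package sqlutil

import (
	"database/sql"
	"time"
)

// DbProperties carries the three new config values.
type DbProperties interface {
	MaxOpenConns() int
	MaxIdleConns() int
	ConnMaxLifetime() time.Duration
}

// applyProperties sets the connection limits on an opened *sql.DB.
func applyProperties(db *sql.DB, props DbProperties) {
	db.SetMaxOpenConns(props.MaxOpenConns())       // e.g. 100
	db.SetMaxIdleConns(props.MaxIdleConns())       // e.g. 2
	db.SetConnMaxLifetime(props.ConnMaxLifetime()) // <= 0 means reuse connections forever
}
```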
* Fix wasm builds
* Remove file accidentally added from working tree
Co-authored-by: Tomas Jirka <tomas.jirka@email.cz>
* Correctly generate backpagination tokens for events which have the same depth
With tests. Unfortunately the code around here is hard to understand.
There will be a subsequent PR which fixes this up now that we have a test
harness in place.
* Add postgres impl
* More linting
* Fix psql statement so it actually works
* sql/backwards_extremities: Shift to table format and share code
This is an initial cut to reduce boilerplate at the storage layer.
It removes the need for two `_table.go` files, one for each DB engine,
replacing them with a single struct backed by an interface which
supplies the raw SQL statements.
The actual impl sits alongside the interface declaration, which is
generally regarded as best practice (though there are no canonical sources).
Especially in this case where the impl is tiny (functions returning
strings) and relies heavily on the function signatures of the
table struct (for parameters), having the context in the same file
is useful.
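A sketch of the shape, with illustrative names: one shared table struct,
plus an interface whose implementations just return the engine-specific SQL
strings.
```
// Illustrative only - names and SQL are assumptions.
package tables

import (
	"context"
	"database/sql"
)

// backwardsExtremitiesStatements returns the raw SQL for one DB engine; its
// implementation sits alongside this interface in the same file.
type backwardsExtremitiesStatements interface {
	insertBackwardExtremitySQL() string
	selectBackwardExtremitiesForRoomSQL() string
}

// BackwardsExtremities is the single struct shared by postgres and sqlite.
type BackwardsExtremities struct {
	db    *sql.DB
	stmts backwardsExtremitiesStatements
}

// InsertBackwardExtremity records that we know eventID but not the events before it.
func (t *BackwardsExtremities) InsertBackwardExtremity(ctx context.Context, roomID, eventID string) error {
	_, err := t.db.ExecContext(ctx, t.stmts.insertBackwardExtremitySQL(), roomID, eventID)
	return err
}

// BackwardExtremitiesForRoom returns the event IDs we need to backfill from.
func (t *BackwardsExtremities) BackwardExtremitiesForRoom(ctx context.Context, roomID string) (eventIDs []string, err error) {
	rows, err := t.db.QueryContext(ctx, t.stmts.selectBackwardExtremitiesForRoomSQL(), roomID)
	if err != nil {
		return nil, err
	}
	defer rows.Close() // nolint: errcheck
	for rows.Next() {
		var eventID string
		if err = rows.Scan(&eventID); err != nil {
			return nil, err
		}
		eventIDs = append(eventIDs, eventID)
	}
	return eventIDs, rows.Err()
}
```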
* Remove _table redundancy
* Update gomatrixserverlib
* Try to build invite stripped state if not given to us
* SendInvite improvements
* Transpose invite_room_state into invite_state.events for sync API
* Remove syncapi debugging output
* Use RespInviteV2
* Update gomatrixserverlib
* Send the invite event as a normal roomserver event too, for incorporating into the room (should this be done by the roomserver automatically for invite inputs?)
* Federation sender use invite_room_state, room server try to insert membership state
* Check supported room versions on the invite endpoint
* Prevent roomserver query API from trying to handle requests for stub rooms
* Adding a nolint
* Replace IsRoomStub with RoomNIDExcludingStubs, fix query API to use that instead
* Review comments
* Implement history visibility checks for /backfill
Required for p2p to show history correctly.
* Add sytest
* Logging
* Fix two backfill bugs which prevented backfill from working correctly
- When receiving backfill requests, do not send the event that was in the original request.
- When storing backfill results, correctly update the backwards extremity for the room.
* hack: make backfill work multiple times
* add sqlite impl and remove logging
* Linting
* Use HeaderedEvent in syncapi
* Update notifier test
* Fix persisting headered event
* Clean up unused API function
* Fix overshadowed err from linter
* Write headered JSON to invites table too
* Rename event_json to headered_event_json in syncapi database schemas
* Fix invites_table queries
* Update QueryRoomVersionCapabilitiesResponse comment
* Fix syncapi SQLite
* Use a fork of pq which supports userCurrent on wasm
* Use sqlite3_js driver when running in JS
* Add cmd/dendritejs to pull in sqlite3_js driver for wasm only
* Update to latest go-sqlite-js version
* Replace prometheus with a stub. sigh
* Hard-code a config and don't use opentracing
* Latest go-sqlite3-js version
* Generate a key for now
* Listen for fetch traffic rather than HTTP
* Latest hacks for js
* libp2p support
* More libp2p
* Fork gjson to allow us to enforce auth checks as before
Previously, all events would come down redacted because the hash checks
would fail. They failed because sjson.DeleteBytes didn't remove the keys
that aren't used for hashing, and that in turn was caused by a build tag
which included a file that no-oped the returned index.
See https://github.com/tidwall/gjson/issues/157
When it's resolved, let's go back to mainline.
* Use gjson@1.6.0 as it fixes https://github.com/tidwall/gjson/issues/157
* Use latest gomatrixserverlib for sig checks
* Fix a bug which could cause exclude_from_sync to not be set
Caused when sending events over federation.
* Use query variadic to make lookups actually work!
* Latest gomatrixserverlib
* Add notes on getting p2p up and running
Partly so I don't forget myself!
* refactor: Move p2p specific stuff to cmd/dendritejs
This is important because otherwise the normal build of dendrite will fail:
the p2p libraries depend on syscall/js, which doesn't exist in normal builds.
Also, clean up main.go to read a bit better.
* Update ho-http-js-libp2p to return errors from RoundTrip
* Add an LRU cache around the key DB
We actually need this for P2P because otherwise we can *segfault*
with errors like "runtime: unexpected return pc for runtime.handleEvent",
where the event is a `syscall/js` event. This is triggered by spamming
sql.js during "Checking event signatures for 14 events of room state",
which hammers the key DB repeatedly in quick succession.
Using a cache fixes this, though the underlying cause is probably a bug
in the version of Go I'm on (1.13.7).
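Roughly the shape of the wrapper, assuming github.com/hashicorp/golang-lru
and a simplified stand-in for the key database interface:
```
// Illustrative only - the real key database interface is richer.
package keycache

import (
	"context"

	lru "github.com/hashicorp/golang-lru"
)

// keyDB is a simplified stand-in for the underlying key database.
type keyDB interface {
	FetchKey(ctx context.Context, serverNameAndKeyID string) ([]byte, error)
}

// CachingKeyDB checks an in-memory LRU before hitting the database, so
// repeated signature checks don't hammer sql.js in quick succession.
type CachingKeyDB struct {
	db    keyDB
	cache *lru.Cache
}

func NewCachingKeyDB(db keyDB, size int) (*CachingKeyDB, error) {
	cache, err := lru.New(size)
	if err != nil {
		return nil, err
	}
	return &CachingKeyDB{db: db, cache: cache}, nil
}

func (c *CachingKeyDB) FetchKey(ctx context.Context, serverNameAndKeyID string) ([]byte, error) {
	if cached, ok := c.cache.Get(serverNameAndKeyID); ok {
		return cached.([]byte), nil
	}
	key, err := c.db.FetchKey(ctx, serverNameAndKeyID)
	if err == nil {
		c.cache.Add(serverNameAndKeyID, key)
	}
	return key, err
}
```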
* breaking: Add Tracing.Enabled to toggle whether we do opentracing
Defaults to false, which is why this is a breaking change. We need
this flag because WASM builds cannot do opentracing.
* Start adding conditional builds for wasm to handle lib/pq
The general idea here is to give the wasm build a `NewXXXDatabase`
that doesn't import any postgres package, so we never pull in
`lib/pq`, which doesn't work under WASM (undefined `userCurrent`).
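Sketch of the wasm side of the split (file contents and names are
illustrative); a sibling file carries the opposite build constraint and does
the `lib/pq` import:
```
// +build js,wasm

// Illustrative only - the wasm-only constructor never references lib/pq,
// so GOOS=js builds avoid its undefined userCurrent.
package storage

import (
	"errors"
	"strings"
)

// Database stands in for whatever interface this storage package exposes.
type Database interface{}

// NewDatabase only understands sqlite data sources on wasm builds.
func NewDatabase(dataSourceName string) (Database, error) {
	if strings.HasPrefix(dataSourceName, "file:") {
		return newSQLiteJSDatabase(dataSourceName) // backed by the sqlite3_js driver
	}
	return nil, errors.New("only sqlite is supported in wasm builds")
}

func newSQLiteJSDatabase(dsn string) (Database, error) {
	// Placeholder for the sqlite3_js-backed implementation.
	return struct{}{}, nil
}
```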
* Remove lib/pq for wasm for syncapi
* Add conditional building to remaining storage APIs
* Update build script to set env vars correctly for dendritejs
* sqlite bug fixes
* Docs
* Add a no-op main for dendritejs when not building under wasm
* Use the real prometheus, even for WASM
Instead, dendrite-sw.js must mock out `process.pid` and `fs.stat` - the
latter must invoke the callback with an error (e.g. `EINVAL`) in order for
it to work:
```
// Pretend to have a process ID.
global.process = {
    pid: 1,
};
// Fail the stat call with an error code so prometheus can cope.
global.fs.stat = function(path, cb) {
    cb({
        code: "EINVAL",
    });
};
```
* Linting
* bugfix: fix panic on new invite events from sytest
I'm unsure why the previous code didn't work, but it's
clearer, quicker and easier to read the `LastInsertId()` way.
Previously, the code would panic as the SELECT would fail
to find the last inserted row ID.
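Roughly what the `LastInsertId()` form looks like (parameter names and the
statement are illustrative):
```
// Illustrative only - parameter names and the statement are assumptions.
package sqlite3

import (
	"context"
	"database/sql"
)

// insertInviteEvent inserts a row and returns its ID straight from the driver
// instead of issuing a follow-up SELECT (which was panicking before).
func insertInviteEvent(ctx context.Context, txn *sql.Tx, stmt *sql.Stmt, roomID, target string, eventJSON []byte) (int64, error) {
	res, err := txn.StmtContext(ctx, stmt).ExecContext(ctx, roomID, target, eventJSON)
	if err != nil {
		return 0, err
	}
	return res.LastInsertId()
}
```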
* sqlite: Fix UNIQUE violations and close more cursors
- Add missing `defer rows.Close()`
- Do not have the state block NID as a PRIMARY KEY or else it breaks for blocks
  with >1 state event in them. Instead, rejig the queries so we can still
  have monotonically increasing integers without using AUTOINCREMENT (which
  mandates PRIMARY KEY) - see the sketch below.
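A sketch of one way to keep the IDs monotonic without AUTOINCREMENT, as an
assumption of what "rejig the queries" means here (table and column names
are made up): compute MAX+1 inside the same write transaction before
inserting.
```
// Illustrative only - table and column names are made up.
package sqlite3

import (
	"context"
	"database/sql"
)

// insertStateBlockEntry allocates the next block NID by hand, which works
// without declaring the column PRIMARY KEY (and therefore without AUTOINCREMENT).
func insertStateBlockEntry(ctx context.Context, txn *sql.Tx, eventNID int64) (int64, error) {
	var blockNID int64
	// MAX+1 inside the same write transaction keeps the value monotonic.
	err := txn.QueryRowContext(ctx,
		"SELECT COALESCE(MAX(state_block_nid), 0) + 1 FROM syncapi_state_block",
	).Scan(&blockNID)
	if err != nil {
		return 0, err
	}
	_, err = txn.ExecContext(ctx,
		"INSERT INTO syncapi_state_block (state_block_nid, event_nid) VALUES ($1, $2)",
		blockNID, eventNID,
	)
	return blockNID, err
}
```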
* sqlite: Add missing variadic function
* Use LastInsertId because empirically it works over the SELECT form (though I don't know why that is)
* sqlite: Fix invite table by using the global stream pos rather than one specific to invites
If we don't use the global, clients don't get notified about any invites
because the position is too low.
* linting: shadowing
* sqlite: do not use last rowid, we already know the stream pos!
* sqlite: Fix account data table in syncapi by committing insert txns!
* sqlite: Fix failing federation invite
Was failing with 'database is locked' due to multiple write txns
being taken out.
* sqlite: Ensure we return exactly the number of events found in the database
Previously we would return exactly the number of *requested* events, which
meant that several zero-initialised events would bubble through the system,
failing at JSON serialisation time.
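The gist of the fix, as a sketch: build the result from the rows that
actually come back rather than pre-sizing it to the requested limit.
```
// Illustrative only - the real scanning code returns richer event types.
package sqlite3

import "database/sql"

// scanEventJSONs appends one entry per row found, so callers never see
// zero-initialised events padding the slice out to the requested limit.
func scanEventJSONs(rows *sql.Rows, limit int) ([][]byte, error) {
	events := make([][]byte, 0, limit) // capacity only, not length
	for rows.Next() {
		var eventJSON []byte
		if err := rows.Scan(&eventJSON); err != nil {
			return nil, err
		}
		events = append(events, eventJSON)
	}
	return events, rows.Err()
}
```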
* sqlite: let's just ignore the problem for now....
* linting
* Move current work into single branch
* Initial massaging of clientapi etc (not working yet)
* Interfaces for accounts/devices databases
* Duplicate postgres package for sqlite3 (no changes made to it yet)
* Some keydb, accountdb, devicedb, common partition fixes, some more syncapi tweaking
* Fix accounts DB, device DB
* Update naffka dependency for SQLite
* Naffka SQLite
* Update naffka to latest master
* SQLite support for federationsender
* Mostly not-bad support for SQLite in syncapi (although there are problems where lots of events get classed incorrectly as backward extremities, probably because of IN/ANY clauses that are badly supported)
* Update Dockerfile -> Go 1.13.7, add build-base (as gcc and friends are needed for SQLite)
* Implement GET endpoints for account_data in clientapi
* Nuke filtering for now...
* Revert "Implement GET endpoints for account_data in clientapi"
This reverts commit 4d80dff4583d278620d9b3ed437e9fcd8d4674ee.
* Implement GET endpoints for account_data in clientapi (#861)
* Implement GET endpoints for account_data in clientapi
* Fix accountDB parameter
* Remove fmt.Println
* Fix insertAccountData SQLite query
* Fix accountDB storage interfaces
* Add empty push rules into account data on account creation (#862)
* Put SaveAccountData into the right function this time
* Not sure if roomserver is better or worse now
* sqlite work
* Allow empty last sent ID for the first event
* sqlite: room creation works
* Support sending messages
* Nuke fmt.println
* Move QueryVariadic etc into common, other device fixes
* Fix some linter issues
* Fix bugs
* Fix some linting errors
* Fix errcheck lint errors
* Make naffka use postgres as fallback, fix couple of compile errors
* What on earth happened to the /rooms/{roomID}/send/{eventType} routing
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>