* Initial work on persistent queues
* Update index for event ID and server name
* Put things into database (postgres for now)
* Duplicate postgres code into sqlite for now just to stop build errors, will fix SQLite soon
* Fix table name
* Fix index
* Fix table name
* Use RETURNING because LastInsertID is not supported by postgres
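A minimal sketch of the RETURNING approach, assuming `database/sql` with a postgres driver; the table and column names are illustrative, not the real schema:
```go
import "database/sql"

// insertQueuePDU inserts a queued PDU and returns the generated row ID.
// Postgres drivers don't implement res.LastInsertId(), so the new ID is
// read back via RETURNING and Scan instead.
func insertQueuePDU(db *sql.DB, serverName, eventJSON string) (int64, error) {
	var id int64
	err := db.QueryRow(
		`INSERT INTO queue_pdus (server_name, event_json)
		 VALUES ($1, $2) RETURNING id`,
		serverName, eventJSON,
	).Scan(&id)
	return id, err
}
```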
* Use functions
* Marshal headered event
* Don't error on no rows
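A minimal sketch of what that looks like with `database/sql` (query and names invented): an empty result set is expected here, so `sql.ErrNoRows` is swallowed rather than surfaced.
```go
import "database/sql"

// selectQueuedTransactionID returns the pending transaction ID for a
// destination, or "" if there is none. No rows is not an error.
func selectQueuedTransactionID(db *sql.DB, serverName string) (string, error) {
	var txnID string
	err := db.QueryRow(
		`SELECT transaction_id FROM queue_pdus WHERE server_name = $1 LIMIT 1`,
		serverName,
	).Scan(&txnID)
	if err == sql.ErrNoRows {
		return "", nil // nothing queued for this server: fine
	}
	return txnID, err
}
```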
* Don't block if there are PDUs waiting
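The non-blocking wakeup can be done with a buffered channel of size one; a sketch with invented names:
```go
// destinationQueue delivers queued PDUs to one remote server.
type destinationQueue struct {
	notify chan struct{} // capacity 1: pending wakeups coalesce
}

func newDestinationQueue() *destinationQueue {
	return &destinationQueue{notify: make(chan struct{}, 1)}
}

// wake nudges the worker without ever blocking the caller: if a wakeup
// is already pending, the worker will pick up the waiting PDUs anyway.
func (q *destinationQueue) wake() {
	select {
	case q.notify <- struct{}{}:
	default:
	}
}
```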
* Try to tidy up JSON
* Debug logging
* Fix query, use transactions in postgres
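A minimal sketch of the transaction wrapper idea (the helper name is invented): a batch of queue writes either commits in full or rolls back in full.
```go
import "database/sql"

// withTransaction runs fn inside a transaction, rolling back on error.
func withTransaction(db *sql.DB, fn func(*sql.Tx) error) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	if err := fn(tx); err != nil {
		tx.Rollback() // best effort; fn's error is the one that matters
		return err
	}
	return tx.Commit()
}
```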
* Clean up
* Rehydrate more opportunistically
* Fix SQLite
* remove unused types
* Review comments
* Shuffle things around a bit
* Clean up transaction properly
* Don't send empty transactions
* Reduce unnecessary retries
* Count PDUs to make the queue more resilient
* Don't stop when there is work to be done
* Try to limit wakeups
* well this is tedious
* Fix race in incomplete transactions
* Thread safety on transaction ID/count
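A minimal sketch of the locking, assuming the transaction ID and PDU count are shared between the queue worker and goroutines that enqueue new events (field names invented):
```go
import "sync"

// queueState guards the in-progress transaction ID and PDU count.
type queueState struct {
	mu            sync.Mutex
	transactionID string
	pduCount      int
}

func (s *queueState) addPDU(txnID string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.transactionID = txnID
	s.pduCount++
}

// take atomically snapshots and resets the counters for sending.
func (s *queueState) take() (txnID string, pdus int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	txnID, pdus = s.transactionID, s.pduCount
	s.transactionID, s.pduCount = "", 0
	return
}
```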
* Remove membership table from account DB
And make code which needs that data use the currentstateserver instead
* Unbreak tests; use a membership enum for space
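The enum idea, sketched with illustrative values (the real mapping may differ):
```go
// membership is stored as a small integer rather than a repeated string.
type membership int

const (
	membershipLeave membership = iota
	membershipInvite
	membershipJoin
	membershipBan
)

// membershipFromString maps the event content value to the enum.
func membershipFromString(s string) membership {
	switch s {
	case "invite":
		return membershipInvite
	case "join":
		return membershipJoin
	case "ban":
		return membershipBan
	default:
		return membershipLeave
	}
}
```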
* Add a new component: currentstateserver
- Add a skeleton for it, with databases and a single query method.
- Add integration tests for it.
- Add listen/address fields to the config (breaking, as people will now have to specify these fields for their config to validate)
Not currently hooked up to anything yet.
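A hypothetical shape for the new config section; the field names are assumptions rather than the actual keys:
```go
// CurrentStateServer is the assumed shape of the new config section.
type CurrentStateServer struct {
	// Listen is the address the component binds its internal API to.
	Listen string `yaml:"listen"`
	// Address is the address other components use to reach it.
	Address string `yaml:"address"`
}
```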
* Unbreak config tests
* Add current_state to sample config
* comments
* Move filter table to syncapi where it is used
* Implement /sync `limited` and read timeline limit from stored filters
We now fully handle `room.timeline.limit` filters (inline and stored) and
return the right value for `limited` in sync responses (see the sketch below).
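A minimal sketch of the limit handling (types and names invented): query one more event than `room.timeline.limit` allows, and if the extra row comes back the timeline was truncated, so drop it and report `limited: true`.
```go
import "encoding/json"

const defaultTimelineLimit = 20 // assumed fallback when the filter sets no limit

// applyTimelineLimit trims events to the filter's limit and reports
// whether the timeline was truncated. Callers fetch limit+1 events so
// truncation is detectable.
func applyTimelineLimit(events []json.RawMessage, limit int) ([]json.RawMessage, bool) {
	if limit <= 0 {
		limit = defaultTimelineLimit
	}
	if len(events) <= limit {
		return events, false
	}
	// Strip the extra event, keeping the most recent `limit` events.
	return events[len(events)-limit:], true
}
```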
* Update whitelist
* Fall back to the default timeline limit if it's unset, and strip the extra event correctly
* Update whitelist
* Return remote errors from FS.PerformJoin
Follows the same pattern as PerformJoin on roomserver (no error return).
Also return the right format for incompatible room version errors.
Makes a bunch of tests pass!
* Handle network errors better when returning remote HTTP errors
* Linting
* Fix tests
* Update whitelist, pass network errors through in API=1 mode
* Add PerformInvite and refactor how errors get handled
- Rename `JoinError` to `PerformError`
- Remove `error` from the API function signature entirely. This forces
errors to be bundled into `PerformError`, which makes it easier for callers
to detect and handle them. On network errors, HTTP clients will synthesise a
`PerformError` (see the sketch below).
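A minimal sketch of the pattern, with all types and the client shape invented for illustration:
```go
type PerformError struct {
	Code string // machine-readable failure code
	Msg  string // human-readable detail
}

type PerformJoinRequest struct{ RoomID, UserID string }

type PerformJoinResponse struct {
	RoomID string
	Error  *PerformError // nil means success
}

// httpClient stands in for the internal-API HTTP client.
type httpClient struct {
	doPOST func(path string, req, res interface{}) error
}

// PerformJoin returns no Go error at all: every failure, including a
// network failure reaching the remote component, is bundled into
// res.Error for the caller to inspect.
func (c *httpClient) PerformJoin(req *PerformJoinRequest, res *PerformJoinResponse) {
	if err := c.doPOST("/api/performJoin", req, res); err != nil {
		res.Error = &PerformError{Code: "remote_failure", Msg: err.Error()}
	}
}
```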
* Unbreak everything; thanks Go!
* Send back JSONResponse according to the PerformError
* Update federation invite code too
* Pass join errors through internal API boundaries
Required for certain invite sytests. We will need to think of a
better way of handling this going forwards.
* Include m.room.avatar in stripped state; handle trailing slashes when GETing state events
* Update whitelist
* Update whitelist
We would return a 403 first (as the server would not be allowed to
see this event) and only then return a 404 if the event is not in
the given room. We now invert those checks for /state and /state_ids
to make the tests pass.
* Join room body is optional
* Support deprecated login by user/password
* Implement dummy key upload endpoint
* Make a definitive end to /messages if we hit the create event in back-pagination
* Linting
Produces output like:
```
Non-Spec APIs: 0% (0/52 tests)
--------------
Non-Spec API : 0% (0/50 tests)
Unknown API (no group specified): 0% (0/2 tests)
```
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
* Derive content ID from hash+filename but preserve dedupe; improve Content-Disposition and ASCII handling
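A minimal sketch of the derivation (not the exact media code): hashing the content hash together with the filename means identical bytes under the same name dedupe to one ID, while a new filename for the same bytes yields a distinct ID.
```go
import (
	"crypto/sha256"
	"encoding/base64"
)

// mediaID derives a stable ID from the uploaded content's hash plus the
// client-supplied filename.
func mediaID(contentHash []byte, filename string) string {
	h := sha256.New()
	h.Write(contentHash)
	h.Write([]byte(filename))
	return base64.RawURLEncoding.EncodeToString(h.Sum(nil))
}
```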
* Linter fix
* Some more comments
* Update sytest-whitelist
* BREAKING: Make eduserver/appservice use userapi
This is a breaking change because this PR restructures how the AS API
tracks its position in Kafka streams. Previously, it used the account DB
to store partition offsets. However, this is also being used by `clientapi`
for the same purpose, which is bad (each component needs to store offsets
independently or else you might lose messages across restarts). This PR
changes this behaviour to now store partition offsets in the `appservice`
database.
This means that:
- Upon restart, the `appservice` component will attempt to replay all
room events from the beginning of time.
- An additional table will be created in the appservice database, which
in and of itself is backwards compatible (see the offsets sketch below).
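A sketch of what per-component offset storage can look like, with an invented schema:
```go
import "database/sql"

// saveOffset upserts this component's position in a Kafka partition so a
// restart resumes from here instead of replaying from the beginning.
func saveOffset(db *sql.DB, topic string, partition int32, offset int64) error {
	_, err := db.Exec(
		`INSERT INTO appservice_partition_offsets (topic, partition, partition_offset)
		 VALUES ($1, $2, $3)
		 ON CONFLICT (topic, partition) DO UPDATE SET partition_offset = $3`,
		topic, partition, offset,
	)
	return err
}
```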
* Return ErrorConflict