* Refactor registration to align with the spec
* We now keep track of sessions and their completed registration stages.
* We only complete registration if the client has completed a full flow.
* New Derived section in config for data derived from config options.
* New config options for captcha.
* Send params back to client for each registration stage.
Signed-off-by: Andrew Morgan (https://amorgan.xyz) <andrew@amorgan.xyz>
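As an illustration of the session tracking described above, here is a minimal sketch in Go; the type and function names are hypothetical, not Dendrite's actual ones. A session accumulates completed stages, and registration only completes once some allowed flow is fully satisfied.

```go
package registration

import "sync"

// authFlow is an ordered list of stages a client may complete,
// e.g. {"m.login.recaptcha", "m.login.dummy"}.
type authFlow []string

// sessionStore tracks which stages each session has completed.
// All names here are illustrative, not Dendrite's actual types.
type sessionStore struct {
	mu        sync.Mutex
	completed map[string][]string // session ID -> completed stages
}

// completeStage records that a session has passed the given stage.
func (s *sessionStore) completeStage(sessionID, stage string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.completed[sessionID] = append(s.completed[sessionID], stage)
}

// flowCompleted reports whether the session has completed every stage
// of at least one allowed flow, i.e. whether registration may finish.
func (s *sessionStore) flowCompleted(sessionID string, flows []authFlow) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	done := make(map[string]bool)
	for _, stage := range s.completed[sessionID] {
		done[stage] = true
	}
	for _, flow := range flows {
		ok := true
		for _, stage := range flow {
			if !done[stage] {
				ok = false
				break
			}
		}
		if ok {
			return true
		}
	}
	return false
}
```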
* Bump gomatrixserverlib
Mostly because I want to use Erik's go-faster jsoning.
* Update KeyDB for new KeyFetcher API
We now need to implement FetcherName.
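For context, the new interface method might be implemented along these lines; a minimal sketch, assuming the method simply returns a human-readable label (the exact string is a guess):

```go
package keydb

// Database is a stand-in for Dendrite's keydb.Database.
type Database struct{}

// FetcherName implements the method the updated
// gomatrixserverlib.KeyFetcher interface requires; the label
// identifies this fetcher, e.g. in log messages.
func (d *Database) FetcherName() string {
	return "KeyDatabase"
}
```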
* Attempt to fix integ tests
CanonicalJSON doesn't like the empty string, apparently, and anyway
canonicalising it is pointless.
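A minimal sketch of the guard, using a hypothetical helper: an empty byte slice is not valid JSON, so we skip canonicalisation rather than letting gomatrixserverlib.CanonicalJSON reject it.

```go
package example

import "github.com/matrix-org/gomatrixserverlib"

// canonicalise is a hypothetical helper: empty content is returned
// as-is because it is not valid JSON and there is nothing to
// canonicalise anyway.
func canonicalise(content []byte) ([]byte, error) {
	if len(content) == 0 {
		return content, nil
	}
	return gomatrixserverlib.CanonicalJSON(content)
}
```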
* More integ test fix
* Implement query to get state and auth chain
* Add routing for queryStateAndAuthChain
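The request/response for such a query might be shaped roughly like this; the field names are assumptions for illustration, not necessarily the exact API:

```go
package api

import "github.com/matrix-org/gomatrixserverlib"

// QueryStateAndAuthChainRequest asks for the room state before a set
// of events, plus the auth chain for those events. Field names are
// illustrative, not necessarily Dendrite's exact ones.
type QueryStateAndAuthChainRequest struct {
	RoomID       string   `json:"room_id"`
	PrevEventIDs []string `json:"prev_event_ids"`
	AuthEventIDs []string `json:"auth_event_ids"`
}

// QueryStateAndAuthChainResponse returns the state events and the
// full auth chain for the requested events.
type QueryStateAndAuthChainResponse struct {
	StateEvents     []gomatrixserverlib.Event `json:"state_events"`
	AuthChainEvents []gomatrixserverlib.Event `json:"auth_chain_events"`
}
```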
* Comments
* Fix fetching wrong set of events
* Add tests
* Shuffle and comment
* Fix /sync when we have no events
We used a since token of 0 to mean that no token was given. However, naffka
streams start at 0. This causes clients to get stuck spinning forever until an
event is sent.
This changes it so that we pass around pointers instead, with nil meaning a
since token wasn't given.
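A minimal sketch of the pointer approach, with hypothetical names: nil now unambiguously means "no since token", so position 0 stays a valid stream position.

```go
package types

// StreamPosition is a position in an event stream; naffka streams
// legitimately start at 0, so 0 cannot mean "no token".
type StreamPosition int64

// sinceToken is a hypothetical helper: a nil pointer means no since
// token was supplied, i.e. an initial sync.
func sinceToken(since *StreamPosition) (from StreamPosition, initialSync bool) {
	if since == nil {
		return 0, true
	}
	return *since, false
}
```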
* Comment
* Fix unit tests
* Comments
* Fix typo
* Add room alias query endpoint
* Try to fix indentation problems
* Fix linting errors and use of httpReq.FormValue
Signed-off-by: Ross Schulman <ross@rbs.io>
* Run gofmt
* Check for empty alias parameter and fix route URL
Signed-off-by: Ross Schulman <ross@rbs.io>
* Fix some linting errors
Signed-off-by: Ross Schulman <ross@rbs.io>
* Delete extra copy of directory route
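A sketch of what the alias lookup with the empty-alias check might look like; the handler shape, helper names, and example response values are all illustrative:

```go
package directory

import (
	"encoding/json"
	"net/http"
)

// roomDirectoryResponse mirrors the shape of a Matrix
// GET /directory/room/{roomAlias} response.
type roomDirectoryResponse struct {
	RoomID  string   `json:"room_id"`
	Servers []string `json:"servers"`
}

// GetRoomByAlias resolves a room alias taken from the route URL.
// The lookup itself is stubbed out; only the empty-alias check from
// the commits above is shown.
func GetRoomByAlias(w http.ResponseWriter, req *http.Request, alias string) {
	if alias == "" {
		http.Error(w, `{"errcode":"M_MISSING_PARAM","error":"no room alias given"}`,
			http.StatusBadRequest)
		return
	}
	resp := roomDirectoryResponse{
		RoomID:  "!someroom:example.org", // placeholder lookup result
		Servers: []string{"example.org"},
	}
	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(resp)
}
```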
* Bump gomatrixserverlib
(changes to KeyFetcher and KeyDatabase interfaces)
* Store keys rather than json in the keydatabase
Rather than storing the raw JSON returned from a /keys/v1/query call in the
table, store the key itself.
This makes keydb.Database implement the updated KeyDatabase interface.
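A sketch of how storing parsed keys (rather than the raw response) might look, assuming the shape of gomatrixserverlib's lookup types; `upsertKey` is a hypothetical helper standing in for the SQL layer.

```go
package keydb

import (
	"context"

	"github.com/matrix-org/gomatrixserverlib"
)

// Database is a stand-in for Dendrite's keydb.Database.
type Database struct{}

// StoreKeys persists each verify key as its own row (server name,
// key ID, key bytes, validity timestamps) instead of a raw JSON blob.
func (d *Database) StoreKeys(
	ctx context.Context,
	keys map[gomatrixserverlib.PublicKeyLookupRequest]gomatrixserverlib.PublicKeyLookupResult,
) error {
	for req, key := range keys {
		if err := d.upsertKey(ctx, req, key); err != nil {
			return err
		}
	}
	return nil
}

// upsertKey is a hypothetical helper; an INSERT ... ON CONFLICT UPDATE
// against the keys table would go here.
func (d *Database) upsertKey(
	ctx context.Context,
	req gomatrixserverlib.PublicKeyLookupRequest,
	key gomatrixserverlib.PublicKeyLookupResult,
) error {
	return nil
}
```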
We should probably move the handling out from the syncapi, but that
requires the clientapi to stream the current state, which it currently
doesn't. This at least stops the sync and state handling being done in
one file.
The pre-commit hook took 45 seconds to run on my machine, which was more than
enough time for me to get distracted by a swordfight in the corridor.
Let's just run the linters (which still take 6 seconds). It's not the place of
a commit hook to run every test we can think of - that is what CI is for.
We should no longer be building anything into vendor/bin in the scripts,
so adding it to the path can lead to all sorts of confusion if old
binaries are there.
* Update gometalinter config
gometalinter now uses `maligned` instead of `aligncheck`
(https://github.com/alecthomas/gometalinter/pull/367), so we need to update our
config accordingly.
* Update gometalinter
* Disable gotype linter
gotype does not seem to play nicely with the gb vendor directory. In
particular, it wants each of our dependencies to be built and installed (see
https://github.com/golang/go/issues/10969), but (empirically) it will not
accept them being installed in `pkg`, insisting instead that they be in `vendor/pkg`.
This presents a problem because `gb build` builds the packages into `pkg`
(which doesn't seem entirely unreasonable since `.` comes before `vendor` in
`$GOPATH`). `go install github.com/x/y` does install in `vendor/pkg` but
requires us to know the name of each package.
The general conclusion of https://github.com/alecthomas/gometalinter/issues/91
seems to have been that the easiest thing to do is to disable `gotype` for now.
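Taken together, the relevant fragment of `.gometalinter.json` might look something like this (assuming gometalinter's JSON config format; `aligncheck` would likewise be replaced by `maligned` in any `Enable` list):

```json
{
  "Disable": ["gotype"]
}
```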
* Fix `unparam` lint
* Fix goshadow lint
The motivation for this is to make it easier to see whether a Travis failure is due to linting, unit tests or integration tests, without having to dig through the logs.
It also means that each job is independent, so if e.g. the linting fails, the unit tests will still be run.
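A sketch of the kind of job split this implies in `.travis.yml`; the env-variable dispatch is one common Travis pattern, not necessarily the exact setup used here:

```yaml
matrix:
  include:
    - env: TEST_SUITE=lint
    - env: TEST_SUITE=unit-test
    - env: TEST_SUITE=integ-test

# A hypothetical travis-test.sh runs only the suite named by
# $TEST_SUITE, so a lint failure no longer hides the test results.
script: ./travis-test.sh
```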
This has two benefits:
1. Using channels makes it easier to time out while waiting
2. Allows us to clean up goroutines that were waiting if we time out the
request
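A minimal sketch of the pattern, with hypothetical names: the wait happens in a goroutine that delivers its result on a channel, so the caller can select on a timeout, and the waiting goroutine exits when the request is cancelled rather than leaking.

```go
package example

import (
	"context"
	"errors"
	"time"
)

// waitForEvent blocks until something arrives on wake, the timeout
// fires, or the request context is cancelled.
func waitForEvent(ctx context.Context, wake <-chan struct{}, timeout time.Duration) error {
	done := make(chan struct{})
	go func() {
		select {
		case <-wake:
			close(done)
		case <-ctx.Done():
			// Request abandoned: stop waiting so this
			// goroutine does not leak.
		}
	}()
	select {
	case <-done:
		return nil
	case <-time.After(timeout):
		return errors.New("timed out waiting for events")
	case <-ctx.Done():
		return ctx.Err()
	}
}
```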
The way we store the partition offsets for Kafka streams means that when
we start after a crash we may get the last message we processed again.
This means that we have to be careful to ensure that the processing
handles consecutive duplicates correctly.
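A sketch of duplicate-safe processing, again with hypothetical names: after a crash the consumer may replay the last committed message, so applying a message must be a no-op for offsets at or below the last one already handled.

```go
package consumer

// partitionState remembers the highest offset already applied for a
// single partition.
type partitionState struct {
	lastOffset int64
}

// process applies a message exactly once: a replayed duplicate (an
// offset we have already seen) is skipped rather than reapplied.
func (p *partitionState) process(offset int64, apply func() error) error {
	if offset <= p.lastOffset {
		// Already handled before the crash; skip the duplicate.
		return nil
	}
	if err := apply(); err != nil {
		return err
	}
	p.lastOffset = offset
	return nil
}
```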