* Add libp2p-go
* Some tweaks, tidying up
(cherry picked from commit 1a5bb121f8121c4f68a27abbf25a9a35a1b7c63e)
* Move p2p dockerfile
(cherry picked from commit 8d3bf44ea1bf37f950034e73bcdc315afdabe79a)
* Remove containsBackwardsExtremity
* Fix some linter errors, update some libp2p packages/calls, other tidying up
* Add -port for dendrite-p2p-demo
* Use instance name as key ID
* Remove P2P demo docker stuff, no longer needed now that we have SQLite
* Remove Dockerfile-p2p too
* Remove p2p logic from dendrite-monolith-server
* Inject publicRoomsDB in publicroomsapi
Inject publicRoomsDB instead of switching on base.libP2P.
See: https://github.com/matrix-org/dendrite/pull/956/files?file-filters%5B%5D=.go#r406276914
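A minimal sketch of the injection idea, using hypothetical names (the real interface and setup function in the PR may differ): the caller constructs whichever database it wants and hands it in, so publicroomsapi no longer branches on `base.libP2P`.
```
package publicroomsapi

// PublicRoomsDatabase is an illustrative stand-in for the storage interface
// that the component now receives from its caller.
type PublicRoomsDatabase interface {
	CountPublicRooms() (int64, error) // placeholder method
}

// SetupPublicRoomsAPI gets handed whichever implementation the caller chose,
// with no knowledge of p2p vs monolith inside the component itself.
func SetupPublicRoomsAPI(db PublicRoomsDatabase) {
	// ... register routes that use db
}

// The monolith passes its postgres/SQLite-backed database; the libp2p demo
// passes its DHT-backed one:
//   SetupPublicRoomsAPI(postgresDB)
//   SetupPublicRoomsAPI(libp2pDB)
```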
* Fix lint warning
* Extract mDNSListener from base.go
* Extract CreateFederationClient into demo
* Create P2PDendrite from BaseDendrite
Extract logic specific to P2PDendrite from base.go
* Set base.go to upstream/master
* Move pubsub to demo cmd
* Move PostgreswithDHT to cmd
* Remove unstable features
* Add copyrights
* Move libp2pvalidator into p2pdendrite
* Rename dendrite-p2p-demo -> dendrite-demo-libp2p
* Update copyrights
* go mod tidy
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
* Check if user has the power level to edit the room visibility
* fix review changes
* Add nil check for queryEventsRes.StateEvents
Co-authored-by: Alex Chen <Cnly@users.noreply.github.com>
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
* Create and glue ExternalPublicRoomsProvider into the public rooms component
This is how we will link p2p stuff to dendrite proper.
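A sketch of what such a provider hook can look like; the names below are illustrative, not necessarily the exact interface added in the PR. The public rooms component accepts an optional provider and merges its results into the local list.
```
package publicroomsapi

// ExternalPublicRoomsProvider is an illustrative version of the hook that
// lets p2p code feed extra entries into the public rooms directory.
type ExternalPublicRoomsProvider interface {
	// Rooms returns public rooms discovered outside the local database,
	// e.g. from other libp2p peers.
	Rooms() []PublicRoom
}

// PublicRoom is a placeholder for the gomatrixserverlib struct used in the PR.
type PublicRoom struct {
	RoomID string `json:"room_id"`
	Name   string `json:"name,omitempty"`
}

// mergeRooms combines local results with whatever the provider knows about.
func mergeRooms(local []PublicRoom, provider ExternalPublicRoomsProvider) []PublicRoom {
	if provider == nil {
		return local
	}
	return append(local, provider.Rooms()...)
}
```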
* Use gmsl structs rather than our own
* Implement federated public rooms
- Make thirdparty endpoint r0 so riot-web loads the public room list
* Typo
* Missing callsites
* Implement gomatrixserverlib.HeaderedEvent, which should allow us to store room version headers along with the event across API boundaries and consumers/producers, and intercept unmarshalling to get the event structure right
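A rough sketch of the headered-event idea; the field names and method below are assumptions for illustration, not the exact gomatrixserverlib API. The point is that the room version travels with the event JSON, so the consumer on the other side of an API boundary can parse the event with the correct structure.
```
package example

import "encoding/json"

// headeredEvent is an illustrative stand-in for gomatrixserverlib.HeaderedEvent:
// the room version header is stored alongside the raw event JSON.
type headeredEvent struct {
	RoomVersion string          `json:"_room_version"`
	EventJSON   json.RawMessage `json:"_event_json"`
}

// UnmarshalJSON reads the header first; the event itself can then be parsed
// using the rules for that specific room version.
func (h *headeredEvent) UnmarshalJSON(data []byte) error {
	type alias headeredEvent // avoids recursing back into this method
	var a alias
	if err := json.Unmarshal(data, &a); err != nil {
		return err
	}
	*h = headeredEvent(a)
	// A real implementation would now hand h.EventJSON and h.RoomVersion to
	// the version-aware event parser.
	return nil
}
```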
* Add federationsender to previous
* Use a fork of pq which supports userCurrent on wasm
* Use sqlite3_js driver when running in JS
* Add cmd/dendritejs to pull in sqlite3_js driver for wasm only
* Update to latest go-sqlite-js version
* Replace prometheus with a stub. sigh
* Hard-code a config and don't use opentracing
* Latest go-sqlite3-js version
* Generate a key for now
* Listen for fetch traffic rather than HTTP
* Latest hacks for js
* libp2p support
* More libp2p
* Fork gjson to allow us to enforce auth checks as before
Previously, all events would come down redacted because the hash
checks would fail. They failed because sjson.DeleteBytes didn't
remove the keys that are excluded from hashing: a build tag pulled in
a file which no-oped the index returned, so the deletions silently did nothing.
See https://github.com/tidwall/gjson/issues/157
When it's resolved, let's go back to mainline.
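To illustrate why the no-op mattered, here is roughly how the content-hash step strips keys before hashing (a sketch, not the gomatrixserverlib code; the real implementation also canonicalises the JSON). If `DeleteBytes` silently leaves these keys in place, the computed hash no longer matches the one in the event and the event gets redacted.
```
package example

import (
	"crypto/sha256"

	"github.com/tidwall/sjson"
)

// contentHash sketches the Matrix content-hash step: strip the keys that are
// excluded from hashing, then hash what remains.
func contentHash(eventJSON []byte) ([32]byte, error) {
	var err error
	for _, key := range []string{"signatures", "unsigned", "hashes"} {
		// If DeleteBytes is a no-op (the gjson build-tag bug), these keys
		// survive and the hash check fails.
		if eventJSON, err = sjson.DeleteBytes(eventJSON, key); err != nil {
			return [32]byte{}, err
		}
	}
	return sha256.Sum256(eventJSON), nil
}
```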
* Use gjson@1.6.0 as it fixes https://github.com/tidwall/gjson/issues/157
* Use latest gomatrixserverlib for sig checks
* Fix a bug which could cause exclude_from_sync to not be set when sending events over federation
* Use query variadic to make lookups actually work!
* Latest gomatrixserverlib
* Add notes on getting p2p up and running
Partly so I don't forget myself!
* refactor: Move p2p specific stuff to cmd/dendritejs
This is important or else the normal build of dendrite will fail,
because the p2p libraries depend on syscall/js, which isn't available
in normal builds.
Also, clean up main.go to read a bit better.
* Update go-http-js-libp2p to return errors from RoundTrip
* Add an LRU cache around the key DB
We actually need this for P2P because otherwise we can *segfault*
with things like: "runtime: unexpected return pc for runtime.handleEvent"
where the event is a `syscall/js` event. This is triggered by spamming sql.js
(e.g. "Checking event signatures for 14 events of room state"), which
hammers the key DB repeatedly in quick succession.
Using a cache fixes this, though the underlying cause is probably a bug
in the version of Go I'm on (1.13.7).
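A minimal sketch of the caching wrapper, assuming hashicorp/golang-lru and a simplified key database interface (the real keydb methods take server names and key IDs rather than a single string):
```
package example

import lru "github.com/hashicorp/golang-lru"

// keyDatabase is a simplified stand-in for the real keydb interface.
type keyDatabase interface {
	FetchKey(serverAndKeyID string) ([]byte, error)
}

// cachedKeyDB wraps a keyDatabase so repeated signature checks for the same
// server key don't hammer sql.js in quick succession.
type cachedKeyDB struct {
	inner keyDatabase
	cache *lru.Cache
}

func newCachedKeyDB(inner keyDatabase, size int) (*cachedKeyDB, error) {
	c, err := lru.New(size)
	if err != nil {
		return nil, err
	}
	return &cachedKeyDB{inner: inner, cache: c}, nil
}

func (c *cachedKeyDB) FetchKey(id string) ([]byte, error) {
	if v, ok := c.cache.Get(id); ok {
		return v.([]byte), nil
	}
	key, err := c.inner.FetchKey(id)
	if err != nil {
		return nil, err
	}
	c.cache.Add(id, key)
	return key, nil
}
```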
* breaking: Add Tracing.Enabled to toggle whether we do opentracing
Defaults to false, which is why this is a breaking change. We need
this flag because WASM builds cannot do opentracing.
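A sketch of what the toggle amounts to; the config field name comes from the commit title, the setup function is illustrative. When tracing is disabled, opentracing's built-in no-op tracer is installed instead of a real one.
```
package example

import "github.com/opentracing/opentracing-go"

// Tracing mirrors the new config section: Enabled defaults to false,
// because WASM builds cannot do opentracing.
type Tracing struct {
	Enabled bool `yaml:"enabled"`
}

// setupTracing is an illustrative setup step.
func setupTracing(cfg Tracing) {
	if !cfg.Enabled {
		// The no-op tracer is safe on every platform, including WASM.
		opentracing.SetGlobalTracer(opentracing.NoopTracer{})
		return
	}
	// ... construct and register a real tracer (e.g. Jaeger) here.
}
```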
* Start adding conditional builds for wasm to handle lib/pq
The general idea here is for the wasm build to have a `NewXXXDatabase`
that doesn't import any postgres package, so we never pull in
`lib/pq`, which doesn't work under WASM (undefined `userCurrent`).
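A sketch of the conditional-build layout. File names, the driver name and the import path are taken loosely from the commit titles above and should be treated as approximate; the structure is the point.
```
// storage_wasm.go - only compiled for the wasm build, so lib/pq is never
// imported there. A mirror file tagged "// +build !wasm" provides the
// postgres-backed NewDatabase via lib/pq.

// +build wasm

package storage

import (
	"database/sql"

	// sql.js-backed driver for the browser; path/name approximate.
	_ "github.com/matrix-org/go-sqlite3-js"
)

// NewDatabase opens the browser-side SQLite database instead of postgres.
func NewDatabase(dataSourceName string) (*sql.DB, error) {
	return sql.Open("sqlite3_js", dataSourceName)
}
```
The build script then selects this file by building with GOOS=js GOARCH=wasm, which is what the dendritejs env var changes below are about.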
* Remove lib/pq for wasm for syncapi
* Add conditional building to remaining storage APIs
* Update build script to set env vars correctly for dendritejs
* sqlite bug fixes
* Docs
* Add a no-op main for dendritejs when not building under wasm
* Use the real prometheus, even for WASM
Instead, dendrite-sw.js must mock out `process.pid` and `fs.stat`; the
latter must invoke the callback with an error (e.g. `EINVAL`) in order
for it to work:
```
global.process = {
    pid: 1,
};
global.fs.stat = function(path, cb) {
    cb({
        code: "EINVAL",
    });
};
```
* Linting
* SQLite support for appservice
* SQLite support for mediaapi
* Copyright notices
* SQLite for public rooms API (although with some slight differences in behaviour)
* Lazy match aliases, add TODOs
* Move current work into single branch
* Initial massaging of clientapi etc (not working yet)
* Interfaces for accounts/devices databases
* Duplicate postgres package for sqlite3 (no changes made to it yet)
* Some keydb, accountdb, devicedb, common partition fixes, some more syncapi tweaking
* Fix accounts DB, device DB
* Update naffka dependency for SQLite
* Naffka SQLite
* Update naffka to latest master
* SQLite support for federationsender
* Mostly not-bad support for SQLite in syncapi (although there are problems where lots of events get classed incorrectly as backward extremities, probably because of IN/ANY clauses that are badly supported)
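The IN/ANY problem comes from SQLite having no postgres-style `= ANY($1)` array parameter; the usual workaround, sketched here with an illustrative helper (in the spirit of the QueryVariadic helper moved into common later in this log), is to generate the right number of placeholders per query:
```
package example

import (
	"fmt"
	"strings"
)

// queryVariadic builds a "($1, $2, ..., $n)" placeholder list, since SQLite
// has no equivalent of postgres' "= ANY($1)" for passing a whole slice.
func queryVariadic(count int) string {
	placeholders := make([]string, count)
	for i := range placeholders {
		placeholders[i] = fmt.Sprintf("$%d", i+1)
	}
	return "(" + strings.Join(placeholders, ", ") + ")"
}

// Usage: the statement is prepared per query with the expanded list, e.g.
//   query := "SELECT event_id FROM events WHERE event_id IN " + queryVariadic(len(ids))
//   rows, err := db.Query(query, idsAsInterfaceSlice...)
```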
* Update Dockerfile -> Go 1.13.7, add build-base (as gcc and friends are needed for SQLite)
* Implement GET endpoints for account_data in clientapi
* Nuke filtering for now...
* Revert "Implement GET endpoints for account_data in clientapi"
This reverts commit 4d80dff4583d278620d9b3ed437e9fcd8d4674ee.
* Implement GET endpoints for account_data in clientapi (#861)
* Implement GET endpoints for account_data in clientapi
* Fix accountDB parameter
* Remove fmt.Println
* Fix insertAccountData SQLite query
* Fix accountDB storage interfaces
* Add empty push rules into account data on account creation (#862)
* Put SaveAccountData into the right function this time
* Not sure if roomserver is better or worse now
* sqlite work
* Allow empty last sent ID for the first event
* sqlite: room creation works
* Support sending messages
* Nuke fmt.println
* Move QueryVariadic etc into common, other device fixes
* Fix some linter issues
* Fix bugs
* Fix some linting errors
* Fix errcheck lint errors
* Make naffka use postgres as fallback, fix couple of compile errors
* What on earth happened to the /rooms/{roomID}/send/{eventType} routing
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
* Always defer *sql.Rows.Close and consult with Err
database/sql.Rows.Next() makes sure to call Close only after exhausting
the result rows, which would NOT happen when returning early from a bad Scan.
Close being idempotent makes it a great candidate to always be deferred,
regardless of what happens later on the result set.
This change also makes sure to call Err() after exhausting Next() and to
propagate non-nil results from it, as the documentation advises (see the
sketch below).
Closes #764
Signed-off-by: Kiril Vladimiroff <kiril@vladimiroff.org>
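The pattern being applied, sketched against plain database/sql:
```
package example

import "database/sql"

// selectEventIDs shows the pattern: always defer Close (it is idempotent),
// and always consult rows.Err() after the Next() loop finishes.
func selectEventIDs(db *sql.DB) ([]string, error) {
	rows, err := db.Query("SELECT event_id FROM events")
	if err != nil {
		return nil, err
	}
	defer rows.Close() // safe even though Next() may also close the rows

	var ids []string
	for rows.Next() {
		var id string
		if err := rows.Scan(&id); err != nil {
			return nil, err // early return: the deferred Close still runs
		}
		ids = append(ids, id)
	}
	// Next() returning false can mean "done" or "iteration failed";
	// Err() distinguishes the two, as the documentation advises.
	return ids, rows.Err()
}
```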
* Override named result parameters in last returns
Signed-off-by: Kiril Vladimiroff <kiril@vladimiroff.org>
* Do the same over new changes that got merged
Signed-off-by: Kiril Vladimiroff <kiril@vladimiroff.org>
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
* Wire up publicroomsapi to roomserver events
* Remove parameter that was incorrectly brought over from p2p work
* nolint containsBackwardExtremity for now
* Fall back to postgres when parsing the database connection string for a URI scheme fails
* Fix the behaviour so that it really tries postgres when URL parsing fails, and complains about an unknown scheme when parsing succeeds (sketched below)
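A sketch of the intended behaviour, with illustrative function and driver names: try to parse the connection string as a URI; complain about a genuinely unknown scheme, but fall back to postgres when the string doesn't look like a URI at all (e.g. a key=value DSN).
```
package example

import (
	"fmt"
	"net/url"
)

// openDatabase sketches the fallback logic for choosing a driver from the
// connection string.
func openDatabase(connStr string) (driver string, err error) {
	u, err := url.Parse(connStr)
	if err != nil || u.Scheme == "" {
		// Not a URI (e.g. "dbname=dendrite sslmode=disable"): assume postgres.
		return "postgres", nil
	}
	switch u.Scheme {
	case "postgres", "postgresql":
		return "postgres", nil
	case "file":
		return "sqlite3", nil
	default:
		// Parsing succeeded, so an unrecognised scheme really is an error.
		return "", fmt.Errorf("unknown database scheme: %s", u.Scheme)
	}
}
```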
* In response to #638: POST /publicRooms is already implemented, but it is
unclear from the code that it is. Add some comments and rename a method to
make this clearer.
* We were escaping the URL before performing any pattern matching on it.
This meant that if you sent data that URL-decoded to a "/", it would count as
a "/" in the URL, potentially causing a 404. This was causing some flaky tests
with randomly-generated query parameters.
Now we keep URLs encoded while doing the pattern matching, and only afterwards
URL-decode each query parameter individually before passing it to its
respective handler function (see the sketch below).
github.com/gorilla/mux was also updated to v1.7.3 to fix a bug with URL encoding and subrouters.
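A sketch of the routing change, assuming gorilla/mux and an illustrative handler: match against the still-encoded path, then decode each captured variable before handing it on.
```
package example

import (
	"net/http"
	"net/url"

	"github.com/gorilla/mux"
)

func newRouter() *mux.Router {
	// UseEncodedPath makes mux match against the raw (still-encoded) path,
	// so an encoded "%2F" in user data no longer behaves like a real "/".
	r := mux.NewRouter().UseEncodedPath()
	r.HandleFunc("/rooms/{roomID}/send/{eventType}", func(w http.ResponseWriter, req *http.Request) {
		vars := mux.Vars(req)
		// Decode each captured variable individually before using it.
		roomID, err := url.PathUnescape(vars["roomID"])
		if err != nil {
			http.Error(w, "bad room ID", http.StatusBadRequest)
			return
		}
		_ = roomID // ... pass the decoded values to the real handler
	})
	return r
}
```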