update the readme so it's clearer about the current state of the relay
This commit is contained in:
parent 35558748fa
commit 3b774e65eb
1 changed file with 11 additions and 8 deletions
README.md
@@ -1,19 +1,22 @@
# cerulea-relay
Realtime relay (1hr backfill window) for PDSes with fewer than 1000 repos.
Realtime non-archival relay for third-party AT Proto PDSes.
The idea is that we can have much larger limits if we scale down the volume of the network.
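A minimal sketch of what the realtime, non-archival relaying described above could look like, assuming each PDS firehose is tailed via com.atproto.sync.subscribeRepos over WebSocket and raw frames are fanned out to consumers; connecting with no cursor means no backfill is requested from the PDS. The `ws` dependency, helper names, and reconnect delay are illustrative assumptions, not the relay's actual code:

```typescript
// Sketch only: tail a PDS firehose with no cursor (so no backfill is requested)
// and fan the raw frames out to downstream consumers. All names are illustrative.
import WebSocket from "ws";

// Downstream consumers of the relay's own firehose endpoint would be added here.
const consumers = new Set<WebSocket>();

function tailPds(host: string): void {
  const ws = new WebSocket(`wss://${host}/xrpc/com.atproto.sync.subscribeRepos`);
  ws.on("message", (frame) => {
    // Relay each raw DAG-CBOR frame as-is; nothing is decoded or stored here.
    for (const consumer of consumers) {
      if (consumer.readyState === WebSocket.OPEN) consumer.send(frame);
    }
  });
  // Naive reconnect so a flaky PDS doesn't silently drop out of the relay.
  ws.on("close", () => setTimeout(() => tailPds(host), 5_000));
}

// Hypothetical usage: subscribe to one known PDS.
tailPds("pds.example.com");
```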
In the interest of cost control, we are scaling down the network:
- Only PDSes with fewer than 1000 repos are crawled
- We do no backfilling, only current events are relayed to consumers
- Stale data (≈ 24hrs?) is purged from the database [not doing this yet]
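A rough sketch of how the crawl-eligibility rule above could be checked, paging com.atproto.sync.listRepos on a candidate PDS before subscribing to it; the function name, the limit constant, and the refuse-on-error behaviour are assumptions rather than the relay's actual code:

```typescript
// Sketch only: enforce the "< 1000 repos" rule before crawling a PDS by paging
// through com.atproto.sync.listRepos. REPO_LIMIT and eligibleForCrawl are illustrative.
const REPO_LIMIT = 1000;

interface ListReposResponse {
  cursor?: string;
  repos: { did: string }[];
}

async function eligibleForCrawl(host: string): Promise<boolean> {
  let cursor: string | undefined;
  let count = 0;
  do {
    const url = new URL(`https://${host}/xrpc/com.atproto.sync.listRepos`);
    url.searchParams.set("limit", "1000");
    if (cursor) url.searchParams.set("cursor", cursor);
    const res = await fetch(url);
    if (!res.ok) return false; // unreachable or broken PDS: don't crawl it
    const page = (await res.json()) as ListReposResponse;
    count += page.repos.length;
    if (count >= REPO_LIMIT) return false; // too big for this relay
    cursor = page.cursor;
  } while (cursor);
  return true;
}
```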
The idea is that we can have apps with much larger limits if we scale down the volume of the network.
- Large block sizes
- Large record size limit
- therefore: large text fields in post records, large uploads
- etcetcetc
## todo
- metrics / tracing / observability shit
- keep track of currently subscribed PDSes in application state
- discover PDSes via crawling instead of hardcoding 1 (lol)
- do not allow PDSes with more than 1000 repos
- history:
- timestamp instead of seq number as key?
- purge based on ttl
- takedowns probably
- store indexedAt values
- purge based on ttl
- takedowns
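For the history items above, one possible shape for timestamp-keyed storage with a TTL purge; the better-sqlite3 dependency, schema, column names, and 24-hour TTL are assumptions (the README itself says the purge isn't implemented yet):

```typescript
// Sketch only: key relayed events by an indexedAt timestamp so that the TTL purge
// is a single range delete. Schema and TTL are illustrative assumptions.
import Database from "better-sqlite3";

const db = new Database("relay.db");
db.exec(`
  CREATE TABLE IF NOT EXISTS events (
    indexed_at INTEGER NOT NULL,   -- ms since epoch, when the relay saw the event
    seq        INTEGER NOT NULL,   -- original PDS sequence number, kept for ordering
    frame      BLOB    NOT NULL,   -- raw firehose frame
    PRIMARY KEY (indexed_at, seq)
  )
`);

const TTL_MS = 24 * 60 * 60 * 1000;

function storeEvent(seq: number, frame: Buffer): void {
  db.prepare("INSERT INTO events (indexed_at, seq, frame) VALUES (?, ?, ?)")
    .run(Date.now(), seq, frame);
}

function purgeStale(): void {
  // Everything older than the TTL is dropped; run this on a timer.
  db.prepare("DELETE FROM events WHERE indexed_at < ?").run(Date.now() - TTL_MS);
}

setInterval(purgeStale, 60 * 60 * 1000); // hourly sweep
```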