Reformat chunk design

main
~erin 2023-04-19 12:30:41 -04:00
parent 4cce6065b0
commit 23d98a96d9
Signed by: erin
GPG Key ID: 9A8E308CEFA37A47
2 changed files with 9 additions and 2 deletions

@@ -34,7 +34,7 @@ The source for this site, and our [website](https://mercury.the-system.eu.org) i
All `crates`/`libraries` are in a `no-std` environment. This means we only have access to the [libcore](https://doc.rust-lang.org/core/) functionality.
However, we will be using the `alloc` crate to access the heap, and its `collections` to have access to data structures like `Vec`.
-We should, however, have basic support for [async](https://ferrous-systems.com/blog/stable-async-on-embedded/) and [threading]() in `core::`.
+We should, however, have basic support for [async](https://ferrous-systems.com/blog/embedded-concurrency-patterns/) and [threading]() in `core::`.
## Learning
Before jumping in, I highly recommend learning about **Rust** and embedded development with it.

@@ -41,11 +41,16 @@ Additionally, it has a **UUID** generated via [lolid](https://lib.rs/crates/lolid)
```rust
 const CHUNK_SIZE: u64; // Example static chunk size
-struct Chunk {
+struct ChunkHeader {
     checksum: u64,
     extends: bool,
     encrypted: bool,
     uuid: Uuid,
     modified: u64, // Timestamp of last modified
 }
+struct Chunk {
+    header: ChunkHeader,
+    data: [u8; CHUNK_SIZE],
+}
```
@@ -65,6 +70,8 @@ On startup, an *Actor* can request to read data from the disk. If it has the rig
Also, we are able to verify data. Before passing off the data, we re-hash it using [HighwayHash](https://lib.rs/crates/highway) to check that the result matches the stored checksum.
If it does, we simply pass it along like normal. If not, we refuse, and send an error [message](/development/design/actor.md#messages).
Basically, `part1_offset = BOOT_PARTITION_SIZE`, `part1_data_start = part1_offset + part_header_size`, `chunk1_data_start = part1_data_start + chunk_header_size`.
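The offset arithmetic above can be made concrete with a small sketch. The sizes below are illustrative assumptions only; the real values depend on the actual partition layout.

```rust
// Assumed placeholder sizes -- not the layout's real values.
const BOOT_PARTITION_SIZE: u64 = 1024 * 1024; // assumed 1 MiB boot partition
const PART_HEADER_SIZE: u64 = 512; // assumed partition header size
const CHUNK_HEADER_SIZE: u64 = 64; // assumed chunk header size

// Walk the layout from the start of the disk to the first chunk's data,
// following the formulas above.
fn chunk1_data_start() -> u64 {
    let part1_offset = BOOT_PARTITION_SIZE; // first partition follows the boot partition
    let part1_data_start = part1_offset + PART_HEADER_SIZE;
    part1_data_start + CHUNK_HEADER_SIZE
}
```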
### Writing
Writing uses a similar process. An *Actor* can request to write data. If it has the proper capabilities, we serialize the data, allocate a free chunk[^free_chunk], and write to it.
We *hash* the data first to generate a checksum, and set proper metadata if the data extends past the `CHUNK_SIZE`.
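A minimal sketch of this write-side preparation: the design hashes with HighwayHash, but to keep the example dependency-free a simple FNV-1a hash stands in for it, and `CHUNK_SIZE` is an assumed example value.

```rust
const CHUNK_SIZE: usize = 4096; // assumed example chunk size

// Stand-in checksum (FNV-1a); the actual design uses HighwayHash.
fn checksum(data: &[u8]) -> u64 {
    let mut hash: u64 = 0xcbf2_9ce4_8422_2325;
    for &byte in data {
        hash ^= u64::from(byte);
        hash = hash.wrapping_mul(0x0000_0100_0000_01b3);
    }
    hash
}

// Compute the metadata set before writing: the checksum, and whether the
// data extends past a single chunk.
fn prepare_write(data: &[u8]) -> (u64, bool) {
    (checksum(data), data.len() > CHUNK_SIZE)
}
```

On read, the same hash is recomputed and compared against the stored checksum; a mismatch triggers the error message described above.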