|
|
|
Therefore, the "filesystem" code will just be a library that's simply a low-level interface.
|
|
|
|
|
|
|
|
|
|
## Performance
|
|
|
|
|
I believe that this format should be fairly fast, but only implementation and testing will tell for sure.
|
|
|
|
|
Throughput is the main concern here, rather than latency. We can be asynchronous and wait for many requests to finish, rather than worrying about when each one finishes. This is also better for **SSD** performance.
|
|
|
|
|
1. Minimal data needs to be read in - bit offsets can be used, and only fixed-size metadata must be known
|
|
|
|
|
2. `serde` is fairly optimized for deserialization/serialization
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
3. `HighwayHash` is a very fast and well-optimized hashing algorithm
|
|
|
|
|
4. Async and multithreading will allow for concurrent access, and splitting of resource-intensive tasks across threads
|
|
|
|
|
5. `hashbrown` is quite high-performance
|
|
|
|
|
6. Batch processing increases throughput
|
|
|
|
|
|
|
|
|
|
### Buffering
|
|
|
|
|
The `kernel` will hold two read/write buffers in-memory and will queue reading & writing operations into them.
|
|
|
|
|
They can then be organized and batch processed, in order to optimize **HDD** speed (not having to move the head around), and **SSD** performance (minimizing operations).
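As a minimal sketch of this idea (all names are hypothetical, not the actual `kernel` API): queued writes can be collected as `(offset, data)` pairs and flushed in ascending offset order, so an **HDD** head sweeps in one direction and an **SSD** sees one large batch.

```rust
// Hypothetical sketch of a write buffer that batches queued operations
// and flushes them sorted by disk offset. Names are illustrative only.

struct WriteBuffer {
    queue: Vec<(u64, Vec<u8>)>, // (disk offset, data) pairs awaiting flush
}

impl WriteBuffer {
    fn new() -> Self {
        WriteBuffer { queue: Vec::new() }
    }

    /// Queue a write instead of issuing it immediately.
    fn queue_write(&mut self, offset: u64, data: Vec<u8>) {
        self.queue.push((offset, data));
    }

    /// Flush the whole batch in ascending offset order.
    fn flush(&mut self) -> Vec<(u64, Vec<u8>)> {
        let mut batch = std::mem::take(&mut self.queue);
        batch.sort_by_key(|(offset, _)| *offset);
        batch // in a real kernel, these would now be handed to the disk driver
    }
}

fn main() {
    let mut buf = WriteBuffer::new();
    buf.queue_write(4096, vec![2]);
    buf.queue_write(0, vec![1]);
    buf.queue_write(8192, vec![3]);
    let batch = buf.flush();
    assert_eq!(batch[0].0, 0); // lowest offset first
    assert_eq!(batch[2].0, 8192);
    assert!(buf.queue.is_empty()); // queue drained after flush
}
```

A read buffer could use the same structure, merging adjacent requests before issuing them.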
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Filesystem Layout
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| Sector | Size | Header |
|------|------|--------|
|
|
|
|
|
| Boot Sector | `128 B` | `None` |
|
|
|
|
|
| Kernel Sector | `4096 KB` | `None` |
|
|
|
|
|
|
|
|
|
|
| Index Sector | `4096 KB` | `None` |
|
|
|
|
|
| Config Sector | `u64` | `PartitionHeader` |
|
|
|
|
|
| User Sector(s) | `u64` | `PartitionHeader` |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
A virtual section of the disk.
|
|
|
|
|
Additionally, it has a **UUID** generated via [lolid](https://lib.rs/crates/lolid) to enable identifying a specific partition.
|
|
|
|
|
|
|
|
|
|
[binary-layout](https://lib.rs/crates/binary-layout) can be used to parse data from raw bytes on the disk into a structured format, with `no-std`.
|
|
|
|
|
|
|
|
|
|
```rust
use binary_layout::prelude::*;

const LABEL_SIZE: u16 = 128; // Example number of characters that can be used in the partition label

define_layout!(partition_header, BigEndian, {
    partition_type: PartitionType, // Which type of partition it is
    label: [char; LABEL_SIZE], // Human-readable label. Not UTF-8 though :/
    num_chunks: u64, // Chunks in this partition
    uuid: Uuid,
});

enum PartitionType {
    Index, // Used for FS indexing
    Config, // Used for system configuration
    User, // User-defined partition
}

fn parse_data(partition_data: &mut [u8]) -> partition_header::View<&mut [u8]> {
    let mut view = partition_header::View::new(partition_data);

    let id = view.uuid().read(); // Read some data
    view.num_chunks_mut().write(10); // Write data

    view
}
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Small pieces that each partition is split into.
|
|
|
|
|
Contains fixed-length metadata (checksum, encryption flag, modification date, etc.) at the beginning, and then arbitrary data afterwards.
|
|
|
|
|
|
|
|
|
|
`binary-layout` is similarly used to parse the raw bytes of a chunk.
|
|
|
|
|
|
|
|
|
|
```rust
use binary_layout::prelude::*;

const CHUNK_SIZE: u64 = 4096; // Example static chunk size (in bytes)

define_layout!(chunk, BigEndian, {
    checksum: u64,
    encrypted: bool,
    modified: u64, // Timestamp of last modified
    uuid: Uuid,
    data: [u8; CHUNK_SIZE], // Arbitrary data after the fixed-length metadata
});
```
|
|
|
|
|
This struct is then encoded into bytes and written to the disk. Drivers for the disk are *to be implemented*.
|
|
|
|
|
It *should* be possible to do autodetection, and maybe for *Actors* to specify which disk/partition they want to be saved to.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Compression of the data should also be possible, due to `bincode` supporting [flate2](https://lib.rs/crates/flate2) compression.
|
|
|
|
|
Similarly **AES** encryption can be used, and this allows for only specific chunks to be encrypted.[^encryption]
|
|
|
|
|
|
|
|
|
|
### Reading
|
|
|
|
|
On boot, we start executing code from the **Boot Sector**. This contains the assembly instructions, which then jump to the `kernel` code in the **Kernel Sector**.
|
|
|
|
|
|
|
|
|
|
The `kernel` then reads in bytes from the first partition *(as the sectors are fixed-size, we know where this starts)* into memory, deserializing it into a `PartitionHeader` struct via [bincode](https://lib.rs/crates/bincode).
|
|
|
|
|
|
|
|
|
|
From here, as we have a fixed `CHUNK_SIZE`, and know how many chunks are in our first partition, we can read from any chunk on any partition now.
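This locating step is simple arithmetic; a minimal sketch, assuming the example sector sizes from the table above and a hypothetical fixed-size partition header:

```rust
// Illustrative sector/chunk offset arithmetic; all sizes are example values.
const BOOT_SECTOR_SIZE: u64 = 128;
const KERNEL_SECTOR_SIZE: u64 = 4096 * 1024; // 4096 KB
const CHUNK_SIZE: u64 = 4096;

/// Byte offset where the first partition begins (fixed-size sectors precede it).
fn first_partition_start() -> u64 {
    BOOT_SECTOR_SIZE + KERNEL_SECTOR_SIZE
}

/// Byte offset of chunk `n` within a partition starting at `partition_start`,
/// skipping a fixed-size partition header of `header_size` bytes.
fn chunk_offset(partition_start: u64, header_size: u64, n: u64) -> u64 {
    partition_start + header_size + n * CHUNK_SIZE
}

fn main() {
    let start = first_partition_start();
    assert_eq!(start, 128 + 4096 * 1024);
    // Chunk 2 of a partition with a (hypothetical) 256-byte header:
    assert_eq!(chunk_offset(start, 256, 2), start + 256 + 2 * 4096);
}
```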
|
|
|
|
|
|
|
|
|
|
On startup, an *Actor* can request to read data from the disk. If it has the right [capabilities](/development/design/actor.md#ocap), we find the chunk it's looking for from the index, parse the data (using `bincode` again), and send it back.
|
|
|
|
|
|
|
|
|
|
Also, we are able to verify data. Before passing off the data, we re-hash it using [HighwayHash](https://lib.rs/crates/highway) to see if it matches.
|
|
|
|
|
If it does, we simply pass it along like normal. If not, we refuse, and send an error [message](/development/design/actor.md#messages).
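The verify-on-read step might look like the following sketch. A simple FNV-1a hash stands in for `HighwayHash` so the example is self-contained, and the function names are hypothetical:

```rust
// Sketch of verifying a chunk's checksum before handing data to an Actor.
// FNV-1a stands in for HighwayHash here; names are illustrative.

fn hash(data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325; // FNV-1a offset basis
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3); // FNV-1a prime
    }
    h
}

/// Re-hash the chunk data and compare against the stored checksum.
fn verify_chunk(stored_checksum: u64, data: &[u8]) -> Result<&[u8], &'static str> {
    if hash(data) == stored_checksum {
        Ok(data) // pass the data along like normal
    } else {
        Err("checksum mismatch") // would become an error Message to the Actor
    }
}

fn main() {
    let data = b"chunk contents";
    let checksum = hash(data); // set when the chunk was written
    assert!(verify_chunk(checksum, data).is_ok());
    assert!(verify_chunk(checksum, b"corrupted data").is_err());
}
```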
|
|
|
|
|
|
|
|
|
|
### Writing
|
|
|
|
|
|
|
|
|
|
Writing uses a similar process. An *Actor* can request to write data. If it has the proper capabilities, we serialize the data, allocate a free chunk[^free_chunk], and write to it.
|
|
|
|
|
We *hash* the data first to generate a checksum, and set proper metadata.
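A rough sketch of that write path, assuming a `Vec`-based free-chunk list (the struct and helper names are hypothetical, and a toy checksum stands in for `HighwayHash`):

```rust
// Sketch of the write path: checksum the data, allocate a free chunk,
// and record the metadata to be written. Names and types are illustrative.

struct PendingWrite {
    chunk: u64,    // allocated chunk number
    checksum: u64, // hash of the serialized data
    modified: u64, // last-modified timestamp
    data: Vec<u8>,
}

/// Toy rolling checksum standing in for HighwayHash.
fn checksum(data: &[u8]) -> u64 {
    data.iter()
        .fold(0u64, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u64))
}

/// Allocate a chunk from the free list and stage the write with its metadata.
fn write(data: Vec<u8>, free_chunks: &mut Vec<u64>, now: u64) -> Option<PendingWrite> {
    let chunk = free_chunks.pop()?; // take from the end of the free list
    Some(PendingWrite {
        chunk,
        checksum: checksum(&data),
        modified: now,
        data,
    })
}

fn main() {
    let mut free_chunks = vec![3, 7, 9];
    let w = write(b"actor state".to_vec(), &mut free_chunks, 1_700_000_000).unwrap();
    assert_eq!(w.chunk, 9); // took the last free chunk
    assert_eq!(free_chunks, vec![3, 7]);
    assert!(write(Vec::new(), &mut Vec::new(), 0).is_none()); // no free chunks left
}
```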
|
|
|
|
|
|
|
|
|
|
### Permissions
|
|
|
|
Permissions will be determined via [capabilities](/development/design/actor.md#ocap).
|
|
|
|
|
|
|
|
|
|
### Indexing
|
|
|
|
|
Created in-memory on startup, modified directly whenever the filesystem is modified.
|
|
|
|
|
|
|
|
|
|
It's saved in the *Index Sector* (which is at a known offset & size), allowing it to be read in easily on boot.
|
|
|
|
|
It again simply uses `bincode` and compression.
|
|
|
|
|
|
|
|
|
|
The index is simply a `HashMap` from [hashbrown](https://lib.rs/crates/hashbrown).
|
|
|
|
|
|
|
|
|
|
We also have a simple `Vec` of the chunks that are free, which we treat as a stack, allocating by popping from the end.
|
|
|
|
|
While the index itself is not necessarily a fixed size, the sector holding it is, so we read the full fixed-size sector and deserialize as much data as the index needs.
|
|
|
|
|
|
|
|
|
|
```rust
use hashbrown::HashMap;

struct Location {
    partition: Uuid, // Partition identified via Uuid
    chunks: Vec<u64>, // Which chunk(s) in the partition it is
}

let mut index: HashMap<Uuid, Location> = HashMap::new(); // Create the Uuid storage index
let mut free_index: Vec<u64> = Vec::new(); // Index of free chunks

let new_data = (Uuid::new(), b"data"); // Test data w/ an actor Uuid & bytes
let new_data_location = Location {
    partition: Uuid::new(),
    chunks: vec![5, 8], // 5th & 8th chunk in that partition
};

for i in &new_data_location.chunks {
    free_index.retain(|c| c != i); // Remove used chunks from the free chunks list
}
index.insert(new_data.0, new_data_location); // Insert a new entry mapping a data Uuid to a location

index.contains_key(&new_data.0); // Check if the index contains an Actor's data
let uuid_location = index.get(&new_data.0).unwrap(); // Get the Location of a Uuid

if let Some(old) = index.remove(&new_data.0) { // Remove an Actor's data from the index (e.g. on deletion)
    for i in old.chunks {
        free_index.push(i); // Add back the now-free chunks
    }
}
```
|
|
|
|
|
|
|
|
|
|
This then allows the index to be searched easily to find the data location of a specific `Uuid`.
|
|
|
|
It also allows us to tell if an actor *hasn't* been saved yet, letting us know when new chunks need to be allocated for it.
|
|
|
|
|
- Isolation
|
|
|
|
|
- Journaling
|
|
|
|
|
- Resizing
|
|
|
|
|
- Atomic Operations
|
|
|
|
|
|
|
|
|
|
## Executable Format
|
|
|
|
|
Programs written in userspace will need to follow a specific format.
|
|
|
|
```rust
struct PackedExecutable {
    // Remaining fields elided
}
```
|
|
|
|
|
|
|
|
|
|
[^encryption]: Specific details to be figured out later
|
|
|
|
|
|
|
|
|
|
[^free_chunk]: Need to figure out how to efficiently do this. **XFS** seems to just keep another index of free chunks. It also uses a **B+Tree** rather than a hashmap - to look into.
|
|
|
|
|