Throughput is the main concern here, rather than latency. We can work asynchronously and wait for many requests to finish, rather than worrying about when each one finishes. This is also better for **SSD** performance.
The `kernel` will hold two read/write buffers in memory and will queue reading & writing operations into them.
They can then be organized and batch processed, in order to optimize **HDD** speed (not having to move the head around), and **SSD** performance (minimizing operations).
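The queue-then-batch idea above can be sketched as follows. This is a minimal illustration, not the kernel's actual code: the names (`IoOp`, `WriteBuffer`, `flush_batch`) and the sort-by-sector policy are assumptions standing in for whatever batching strategy the driver actually uses.

```rust
use std::mem;

// Hypothetical sketch: writes are queued in memory, then drained in
// ascending sector order, so an HDD head sweeps in one direction and
// the disk sees one large batch instead of many small operations.

#[derive(Debug, Clone)]
struct IoOp {
    sector: u64,
    data: Vec<u8>,
}

struct WriteBuffer {
    queue: Vec<IoOp>,
}

impl WriteBuffer {
    fn new() -> Self {
        Self { queue: Vec::new() }
    }

    // Queue a write without touching the disk yet.
    fn push(&mut self, sector: u64, data: Vec<u8>) {
        self.queue.push(IoOp { sector, data });
    }

    // Drain the queue in ascending sector order — the order we'd hand
    // the operations to the disk driver.
    fn flush_batch(&mut self) -> Vec<IoOp> {
        let mut batch = mem::take(&mut self.queue);
        batch.sort_by_key(|op| op.sector);
        batch
    }
}

fn main() {
    let mut buf = WriteBuffer::new();
    buf.push(90, vec![1]);
    buf.push(3, vec![2]);
    buf.push(42, vec![3]);
    let order: Vec<u64> = buf.flush_batch().iter().map(|op| op.sector).collect();
    println!("{:?}", order); // [3, 42, 90]
}
```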
On boot, we start executing code from the **Boot Sector**. This contains the assembly instructions, which then jump to the `kernel` code in the **Kernel Sector**.
The `kernel` then reads in bytes from the first partition *(as the sectors are fixed-size, we know when this starts)* into memory, parsing it into a structured form.
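Since sectors are fixed-size, finding where the partition starts is just multiplication. A tiny sketch, where the 512-byte sector size and the kernel occupying sectors 1–64 are purely illustrative assumptions:

```rust
// Hypothetical layout: boot sector at 0, kernel sectors next, first
// partition immediately after. All numbers here are illustrative.
const SECTOR_SIZE: u64 = 512;

fn partition_offset(first_partition_sector: u64) -> u64 {
    first_partition_sector * SECTOR_SIZE
}

fn main() {
    // If the kernel occupies sectors 1..=64, the first partition
    // begins at sector 65.
    println!("{}", partition_offset(65)); // 33280
}
```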
On startup, an *Actor* can request to read data from the disk. If it has the right [capabilities](/development/design/actor.md#ocap), we find the chunk it's looking for from the index, parse the data, and send it back.
Writing uses a similar process. An *Actor* can request to write data. If it has the proper capabilities, we serialize the data, allocate a free chunk, and write to it.
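The read path above can be sketched as a capability check followed by an index lookup. This is an assumed shape, not the real API: `Capability`, `ChunkLocation`, the `u128` stand-in for `Uuid`, and the error strings are all illustrative.

```rust
use std::collections::BTreeMap;

// Hypothetical read path: check the actor's capability, then look up
// the chunk location in the index.

type Uuid = u128; // stand-in for a real UUID type

#[derive(PartialEq)]
enum Capability {
    DiskRead,
    DiskWrite,
}

#[derive(Debug, Clone, Copy, PartialEq)]
struct ChunkLocation {
    sector: u64,
    len: u64,
}

fn handle_read(
    caps: &[Capability],
    index: &BTreeMap<Uuid, ChunkLocation>,
    id: Uuid,
) -> Result<ChunkLocation, &'static str> {
    if !caps.contains(&Capability::DiskRead) {
        return Err("missing disk-read capability");
    }
    index.get(&id).copied().ok_or("no chunk for this Uuid")
}

fn main() {
    let mut index = BTreeMap::new();
    index.insert(7, ChunkLocation { sector: 128, len: 4096 });

    assert!(handle_read(&[Capability::DiskRead], &index, 7).is_ok());
    assert!(handle_read(&[], &index, 7).is_err()); // no capability
}
```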
The index is simply an `alloc::` [BTreeMap](https://doc.rust-lang.org/stable/alloc/collections/btree_map/struct.BTreeMap.html). *(If `alloc` isn't available, try [scapegoat](https://lib.rs/crates/scapegoat))*.
This then allows the index to be searched easily to find the data location of a specific `Uuid`.
Whenever an actor makes a request to save data to its `Uuid` location, this can be easily found.
It also allows us to tell if an actor *hasn't* been saved yet, allowing us to know whether we need to allocate new space for writing, or if there's actually something to read.
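That "has this actor been saved yet?" decision falls out of the map naturally: a missing entry means we allocate, an existing one means we reuse. A minimal sketch, where the free-space tracking (`next_free`, 8-sector chunks) is an illustrative stand-in for a real allocator:

```rust
use std::collections::BTreeMap;

// Sketch of the write-path decision: on write, a missing index entry
// means we allocate a new chunk; an existing one means we reuse its
// location.

type Uuid = u128; // stand-in for a real UUID type

fn chunk_for_write(
    index: &mut BTreeMap<Uuid, u64>, // Uuid -> chunk start sector
    next_free: &mut u64,
    id: Uuid,
) -> u64 {
    *index.entry(id).or_insert_with(|| {
        let sector = *next_free;
        *next_free += 8; // assume 8-sector chunks
        sector
    })
}

fn main() {
    let mut index = BTreeMap::new();
    let mut next_free = 100;

    let a = chunk_for_write(&mut index, &mut next_free, 1); // new Uuid: allocates
    let b = chunk_for_write(&mut index, &mut next_free, 1); // seen before: reuses
    assert_eq!(a, b);
    assert_eq!(next_free, 108); // only one chunk was ever allocated
}
```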
Programs written in userspace will need to follow a specific format.
First, users will write a program in **Rust**, using the **Mercury** libraries, with `no_std`.
They'll use [Actors](/development/design/actor.md) to communicate with the `kernel`.
Then, they'll compile it for the proper platform and get a pure binary.
This will be run through an *executable packer* program, the output of which can be downloaded by the package manager, put on disk, etc.
It's then parsed in via `bincode`, and the core is run by the `kernel` in userspace.
Additionally, the raw bytes will be compressed.
Then, whether reading [chunks](#chunk) from memory or from disk, we know whether it will run on the current system, how long to read for, and where the compressed bytes start (due to the fixed-length header).
It is then simple to decompress the raw bytes and run them from the `kernel`.
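A fixed-length header with those three properties might look like the sketch below. To be clear, this is an assumed layout, not the actual packed format: the field names, sizes, `MERC` magic, and hand-rolled byte parsing (a real packer would go through `bincode`) are all illustrative.

```rust
// Hypothetical fixed-length header for a packed executable, carrying
// the properties the text relies on: which platform the binary
// targets, how many compressed bytes follow, and a fixed size so we
// know exactly where the compressed payload begins.

const HEADER_LEN: usize = 16;

#[derive(Debug, PartialEq)]
struct PackedHeader {
    magic: [u8; 4],      // identifies the format
    platform: u32,       // which target this binary runs on
    compressed_len: u64, // how many bytes to read after the header
}

impl PackedHeader {
    fn parse(bytes: &[u8]) -> Option<PackedHeader> {
        if bytes.len() < HEADER_LEN {
            return None;
        }
        Some(PackedHeader {
            magic: bytes[0..4].try_into().ok()?,
            platform: u32::from_le_bytes(bytes[4..8].try_into().ok()?),
            compressed_len: u64::from_le_bytes(bytes[8..16].try_into().ok()?),
        })
    }
}

fn main() {
    let mut raw = Vec::new();
    raw.extend_from_slice(b"MERC");
    raw.extend_from_slice(&1u32.to_le_bytes()); // platform id
    raw.extend_from_slice(&4096u64.to_le_bytes()); // payload length
    raw.extend_from_slice(&[0; 4096]); // compressed payload would follow

    let header = PackedHeader::parse(&raw).unwrap();
    assert_eq!(header.compressed_len, 4096);
    // The compressed bytes start right after the fixed-length header:
    let payload = &raw[HEADER_LEN..HEADER_LEN + header.compressed_len as usize];
    assert_eq!(payload.len(), 4096);
}
```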