CybOS

the operating system built on the cyb/stack. no Unix legacy — native abstractions for agents, cyberlinks, ranks, epochs, bandwidth. zero unsafe Rust. bounded liveness everywhere. the cyb/core proof pipeline runs inside this kernel.

design axioms

  1. no files, no processes, no users, no fork/exec, no POSIX. the native abstractions are cyb's own: agents, cyberlinks, ranks, epochs, bandwidth
  2. zero unsafe Rust. the entire OS — kernel, drivers, consensus, storage — compiles without a single unsafe block. memory safety is a compiler-verified property
  3. bounded liveness. no operation can block indefinitely. no module can starve another. every async future has a compile-time deadline. the system degrades gracefully, never halts
  4. neural drivers. hardware support generated by models against stable trait contracts, verified by the compiler, validated by conformance test suites
  5. single address space. no user/kernel split. no syscalls. no TLB flushes. isolation enforced by Rust ownership, not hardware privilege levels

layered design

┌──────────────────────────────────────────────────────┐
│                        CybOS                         │
│  ┌────────────────────────────────────────────────┐  │
│  │               Application Cells                │  │
│  │  Consensus · Graph · Rank · Bandwidth · Query  │  │
│  │ (100% safe Rust, hot-swappable via governance) │  │
│  ├────────────────────────────────────────────────┤  │
│  │             Async Bounded Runtime              │  │
│  │  Epoch budget allocator · Wait-free channels   │  │
│  │   Heartbeat monitor · Degraded mode manager    │  │
│  ├────────────────────────────────────────────────┤  │
│  │                HAL Trait Layer                 │  │
│  │ BlockDevice · NetDevice · Iommu · IRQ · Timer  │  │
│  │   (~3K lines, the entire hardware contract)    │  │
│  ├────────────────────────────────────────────────┤  │
│  │                MMIO Foundation                 │  │
│  │      Compiler-integrated register access       │  │
│  │    Zero unsafe — MMIO as language primitive    │  │
│  ├────────────────────────────────────────────────┤  │
│  │            Neural Driver Harnesses             │  │
│  │ model-generated, compiler-verified per-platform│  │
│  └────────────────────────────────────────────────┘  │
│                          │                           │
│                     ┌────┴────┐                      │
│                     │Hardware │                      │
│                     └─────────┘                      │
└──────────────────────────────────────────────────────┘

cells — not processes

cells replace processes: independently compiled Rust crates that can be loaded, unloaded, and hot-swapped at runtime without stopping the system. each cell has explicit dependency declarations, typed bounded wait-free channels, exclusive ownership of its state, and mandatory heartbeat reporting. cell lifecycle is controlled by on-chain governance.
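as a sketch, the cell contract could look like a Rust trait along these lines; the names (Cell, dependencies, heartbeat_interval, tick) are illustrative, not the actual CybOS API:

```rust
use std::time::Duration;

/// Illustrative cell contract: each cell declares its dependencies,
/// owns its state exclusively, and must heartbeat within a fixed interval.
trait Cell {
    /// cells this cell depends on, by name
    fn dependencies(&self) -> &[&'static str];
    /// maximum interval between heartbeats before the runtime
    /// treats the cell as unresponsive
    fn heartbeat_interval(&self) -> Duration;
    /// called once per scheduling slot; must return within budget
    fn tick(&mut self);
}

struct RankCell {
    ticks: u64,
}

impl Cell for RankCell {
    fn dependencies(&self) -> &[&'static str] {
        &["graph", "consensus"]
    }
    fn heartbeat_interval(&self) -> Duration {
        Duration::from_millis(500)
    }
    fn tick(&mut self) {
        // a real cell would recompute rank here
        self.ticks += 1;
    }
}
```

because a cell owns its state exclusively, hot-swapping it is a matter of draining its channels, unloading the crate, and loading the replacement.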

| missing cell | system behavior |
|---|---|
| Rank | validates blocks, does not answer rank queries |
| Consensus | becomes full node (follows chain, does not vote) |
| Query | participates in consensus, does not serve clients |
| Gossip | works with local state only (island mode) |
| Storage | emergency halt, preserves last state |
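the degraded-mode dispatch above can be sketched as a plain enum match; type and variant names are hypothetical:

```rust
/// Hypothetical sketch: mapping a missing cell to a degraded
/// operating mode, mirroring the table above.
#[derive(Debug, PartialEq)]
enum DegradedMode {
    ValidateOnly,  // Rank missing: validate blocks, no rank queries
    FullNode,      // Consensus missing: follow chain, do not vote
    NoClients,     // Query missing: participate in consensus only
    IslandMode,    // Gossip missing: local state only
    EmergencyHalt, // Storage missing: halt, preserve last state
}

fn on_cell_missing(cell: &str) -> DegradedMode {
    match cell {
        "rank" => DegradedMode::ValidateOnly,
        "consensus" => DegradedMode::FullNode,
        "query" => DegradedMode::NoClients,
        "gossip" => DegradedMode::IslandMode,
        // storage (or anything unrecognized) is the conservative case
        _ => DegradedMode::EmergencyHalt,
    }
}
```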

no file system — the Big Badass Graph

no hierarchical file system. no paths, no inodes, no directories. all persistent data lives in bbg — a content-addressed knowledge graph that subsumes every storage layer. the graph is not a feature of the protocol — the graph IS the protocol.

three primitives: particles (content-addressed nodes — identity = hemera hash), cyberlinks (signed 7-tuple edges), neurons (agents who link — identity = hash of public key). the cybergraph $\mathbb{G} = (P, N, L)$ satisfies six axioms: content-addressing (A1), authentication (A2), append-only growth (A3), entry by linking (A4), focus conservation (A5), homoiconicity (A6). see cybergraph

every cyberlink is simultaneously a learning act and an economic commitment. conviction $(\tau, a)$ is a UTXO: creating a link moves tokens from wallet to edge. cheap talk produces noise. costly links produce knowledge.
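a minimal sketch of the conviction-as-UTXO idea: creating a link debits the wallet and locks the amount on the edge. the field layout and 32-byte hash placeholders are illustrative, not the real 7-tuple encoding:

```rust
/// Illustrative wallet and cyberlink types. Creating a link moves
/// tokens from the wallet onto the edge; an unfunded link is rejected.
struct Wallet {
    balance: u64,
}

struct Cyberlink {
    from_particle: [u8; 32], // hemera hash (placeholder)
    to_particle: [u8; 32],
    neuron: [u8; 32],
    amount: u64, // conviction: tokens locked on this edge
}

fn create_link(
    wallet: &mut Wallet,
    from: [u8; 32],
    to: [u8; 32],
    neuron: [u8; 32],
    amount: u64,
) -> Option<Cyberlink> {
    // cheap talk is impossible: the link fails if the wallet
    // cannot fund the conviction
    if wallet.balance < amount {
        return None;
    }
    wallet.balance -= amount;
    Some(Cyberlink { from_particle: from, to_particle: to, neuron, amount })
}
```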

the tru reads the graph every block and computes cyberank per particle, karma per neuron, syntropy of the whole — the KL divergence of focus from uniform. the tri-kernel integrates three operators: diffusion, springs, heat. convergence guaranteed by the collective focus theorem.
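the syntropy term can be sketched directly: the KL divergence of the normalized focus distribution from uniform, which is zero for uniform focus and grows as focus concentrates. this is the generic formula, not the tru's actual kernel:

```rust
/// Syntropy as KL divergence from uniform:
/// D(p || u) = sum_i p_i * ln(p_i / (1/n)), where p is the
/// normalized focus vector and u is the uniform distribution.
fn syntropy(focus: &[f64]) -> f64 {
    let total: f64 = focus.iter().sum();
    let n = focus.len() as f64;
    focus
        .iter()
        .filter(|&&f| f > 0.0) // 0 * ln(0) contributes nothing
        .map(|&f| {
            let p = f / total;
            p * (p * n).ln()
        })
        .sum()
}
```

uniform focus over n neurons gives 0; all focus on one neuron gives ln(n), the maximum.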

the bbg maintains six NMT indexes over the same data:

| index | namespace | proves |
|---|---|---|
| by_neuron | neuron_id | all edges created by a neuron |
| by_particle | particle_hash | all edges touching a particle |
| focus | neuron_id | current focus value per neuron |
| balance | neuron_id | current balance per neuron |
| coins | denom_hash | fungible token supply |
| cards | card_id | non-fungible knowledge assets |

the graph serves as infrastructure for itself:

| function | how |
|---|---|
| identity | hemera hash = address, graph = PKI |
| key exchange | CSIDH curves as particles, non-interactive |
| consensus | finalized subgraph IS the canonical state |
| fork choice | $\pi$ from graph topology |
| finality | $\pi_i > \tau$, threshold adapts to graph density |
| incentives | $\Delta\pi$ from convergence = reward signal |
| proof archive | stark proofs published as particles |
| version control | patches = cyberlinks, repos = subgraphs |
| file system | ~neuron/path resolves through cyberlinks |
| data availability | NMT per row, erasure-coded, namespace-aware sampling |

no users — the avatar system

identity is a public key (neuron). access control = bandwidth allocation. the cybergraph is public. bandwidth is the only scarce resource.

the cyb/avatar — a collection of neurons under one name. key derivation: m / avatar' / neuron' / particle' / invoice'. all levels hardened. the signer is universal: pluggable signature schemes (ECDSA, Schnorr, BLS), pluggable curves, pluggable derivation paths.
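the all-hardened path can be sketched with BIP-32 style hardened indices (high bit set); the concrete index assignments here are illustrative:

```rust
/// Hardened derivation index, BIP-32 convention: i' = i | 0x8000_0000.
fn hardened(index: u32) -> u32 {
    index | 0x8000_0000
}

/// Sketch of m / avatar' / neuron' / particle' / invoice'
/// with every level hardened. Index values are illustrative.
fn avatar_path(avatar: u32, neuron: u32, particle: u32, invoice: u32) -> [u32; 4] {
    [
        hardened(avatar),
        hardened(neuron),
        hardened(particle),
        hardened(invoice),
    ]
}
```

hardening every level means a leaked child key never exposes its parent's public-key chain, which matters when neurons are published openly in the graph.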

bounded liveness runtime

epoch budget allocator

┌──────────────────────────────────────┐
│         Epoch (e.g., 5 seconds)      │
├──────────┬──────────┬────────────────┤
│Consensus │    TX    │     Rank       │
│ 500ms    │  1500ms  │   remaining    │
│ hard     │  hard    │   soft         │
│ deadline │ deadline │  deadline      │
└──────────┴──────────┴────────────────┘

hard deadline: cell is preempted. soft deadline: cell yields voluntarily.
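the allocator reduces to simple arithmetic: fixed hard-deadline slices, with rank taking whatever remains. a sketch using the diagram's numbers (struct and function names illustrative):

```rust
use std::time::Duration;

/// Illustrative epoch budget: hard-deadline slices for consensus
/// and tx, soft-deadline remainder for rank.
struct EpochBudget {
    consensus: Duration, // hard deadline
    tx: Duration,        // hard deadline
    rank: Duration,      // soft deadline: whatever remains
}

fn allocate(epoch: Duration, consensus: Duration, tx: Duration) -> EpochBudget {
    // saturate at zero rather than panic if the hard slices
    // exceed the epoch
    let rank = epoch
        .checked_sub(consensus)
        .and_then(|r| r.checked_sub(tx))
        .unwrap_or(Duration::ZERO);
    EpochBudget { consensus, tx, rank }
}
```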

compile-time deadline enforcement

// every bounded future declares a compile-time upper bound on its runtime
trait BoundedFuture: Future {
    const MAX_DURATION: Duration;
}

// deadlines are explicit at every await point
let data = stream.read(&mut buf)
    .with_deadline(Duration::from_millis(100))
    .on_timeout(|| Err(Timeout))
    .await;

the Rust compiler becomes the liveness checker.

wait-free shared state

all inter-cell communication uses wait-free data structures. no mutexes, no locks, no semaphores.

  • knowledge graph reads: wait-free concurrent hash map (atomics-based)
  • transaction mempool: wait-free bounded MPMC queue
  • consensus state: epoch-versioned snapshots (readers never block writers)
  • cyberank results: double-buffered (writers update back buffer, atomic swap to front)
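the double-buffering scheme for cyberank results can be sketched with a single atomic index; in this single-writer sketch Rust's &mut rule keeps the back buffer exclusive while readers follow the front index:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Double buffer: the writer fills the back buffer, then atomically
/// flips which buffer readers see. Readers never block the writer.
struct DoubleBuffer<T> {
    buffers: [T; 2],
    front: AtomicUsize, // index of the buffer readers currently see
}

impl<T> DoubleBuffer<T> {
    fn new(a: T, b: T) -> Self {
        Self { buffers: [a, b], front: AtomicUsize::new(0) }
    }
    /// readers always see a complete, published snapshot
    fn read(&self) -> &T {
        &self.buffers[self.front.load(Ordering::Acquire)]
    }
    /// writer-exclusive access to the back buffer
    fn back_mut(&mut self) -> &mut T {
        let back = 1 - self.front.load(Ordering::Relaxed);
        &mut self.buffers[back]
    }
    /// atomically swap front and back
    fn publish(&self) {
        self.front.fetch_xor(1, Ordering::Release);
    }
}
```

a concurrent version would hand readers a guard over the front buffer, but the swap discipline is the same: writers never mutate what readers can see.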

radio — transport layer

radio is the connectivity layer — a fork of iroh where every hash runs through hemera instead of BLAKE3. one hash function, one address space, zero self-describing overhead. 20× cheaper in stark proofs.

| layer | what |
|---|---|
| endpoint | QUIC, Ed25519 identity, encrypted streams |
| relay | encrypted fallback, focus-incentivized |
| hole-punching | NAT traversal, STUN/ICE over QUIC |
| blob + bao | verified streaming, hemera Merkle trees |
| gossip | topic pub/sub, epidemic broadcast trees |
| docs | collaborative replicas, set reconciliation |
| willow | confidential sync, Meadowcap access |

private messaging

neurons exchange keys non-interactively via CSIDH curves published as particles. onion routing with stark proof chains — each hop proves correct forwarding. see cyber/communication

storage proofs

six proof types ensure graph survival at planetary scale:

| proof | guarantees |
|---|---|
| storage | content bytes exist on specific node |
| size | claimed size matches actual bytes |
| replication | k ≥ 3 independent copies exist |
| retrievability | content fetchable within bounded time |
| data availability | block data published and accessible |
| encoding fraud | erasure coding done correctly |

bandwidth

will is the capacity to create cyberlinks. every link burns will — when it runs out, the neuron falls silent. will regenerates at a rate set by stake, so stake ultimately bounds bandwidth.

stake → will regeneration → bandwidth capacity → cyberlink creation → knowledge
  ↑                                                                        |
  └────────────── karma + focus rewards ───────────────────────────────────┘

bandwidth is the only access control mechanism. no passwords, no permissions, no API keys. stake → will → links → knowledge. the economic structure of the cybergraph IS the permission system.
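the will cycle can be sketched as a tiny state machine: links burn will, and regeneration is clamped at a stake-derived cap. the constants and the 1:1 stake-to-capacity rule are illustrative:

```rust
/// Illustrative neuron with stake-bounded will.
struct Neuron {
    stake: u64,
    will: u64,
}

impl Neuron {
    /// will cap: 1 unit per staked token (illustrative ratio)
    fn capacity(&self) -> u64 {
        self.stake
    }
    /// try to spend `cost` will on a cyberlink;
    /// an exhausted neuron falls silent
    fn link(&mut self, cost: u64) -> bool {
        if self.will < cost {
            return false;
        }
        self.will -= cost;
        true
    }
    /// regenerate will each block, clamped at the stake-derived cap
    fn regen(&mut self, per_block: u64) {
        self.will = (self.will + per_block).min(self.capacity());
    }
}
```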

hardware abstraction

three portable formats

| processor | format | what cyb uses it for |
|---|---|---|
| CPU | WASM (wasmi) | logic, layout, events, contracts, state |
| GPU | WGSL (wgpu) | pixels, vectors, text, video, ML fallback |
| NPU | ONNX (burn-webnn) | SLM inference, AI features |
Browser:   WASM (native) + WGSL (WebGPU) + ONNX (WebNN -> NPU)
Desktop:   WASM (wasmi) + WGSL (wgpu -> Vulkan/Metal/DX12) + ONNX (burn)
Mobile:    WASM (wasmi) + WGSL (wgpu -> Metal/GLES) + ONNX (CoreML/NNAPI)

zero-unsafe MMIO

#[mmio_region(base = 0x23B100000, size = 0x100000)]
mod aic {
    register! {
        ENABLE @ 0x010 : ReadWrite<u32> {
            enabled: bool @ 0,
            target_cpu: u4 @ 1..5,
            mode: IrqMode @ 5..7,
        }
    }
}

neural drivers

the HAL is ~3000 lines of Rust trait definitions. drivers generated by models against stable contracts.

| platform | harness size | status |
|---|---|---|
| QEMU/virtio | ~5K lines | reference platform |
| RISC-V (StarFive) | ~10-15K lines | open specs |
| Raspberry Pi 4/5 | ~15-20K lines | well-documented |
| Apple M1 | ~35-40K lines | Asahi knowledge base |
| x86-64 generic | ~20-25K lines | standards-based |

target: 50+ SoC families. ~1M lines of generated code validated against ~8K lines of traits and tests.

see cyb/stack for the crates this kernel is built from. see cyb/features for the capabilities it provides. see cyb/apps for the applications that run on it
