provable computation faster than traditional. genesis jets. 256 symbols. memoization as intelligence.
256 symbols
mapped all 256 byte values. every byte has a meaning:
- 0x01–0x1E: 18 nox instructions (frozen ISA; the range reserves 30 bytes)
- 0x80–0xA3: 36 genesis jets (consensus-critical accelerators)
- 0xA4–0xBF: 28 reserved jet slots (future growth)
- 0x20–0x7E: printable ASCII (untouched)
- 0xC0–0xFF: UTF-8 lead bytes (untouched)
see cyber/research/256 symbols
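the byte map above can be sketched as a classifier. a minimal sketch, assuming the ranges listed above; the function and category names are illustrative, not taken from the cyber codebase:

```python
# illustrative classifier for the 256-symbol byte map;
# names are hypothetical, not the actual cyber implementation
def classify(b: int) -> str:
    """map a byte value 0x00-0xFF to its region in the 256-symbol table."""
    if 0x01 <= b <= 0x1E:
        return "nox instruction"
    if 0x20 <= b <= 0x7E:
        return "printable ASCII"
    if 0x80 <= b <= 0xA3:
        return "genesis jet"
    if 0xA4 <= b <= 0xBF:
        return "reserved jet slot"
    if 0xC0 <= b <= 0xFF:
        return "UTF-8 lead byte"
    return "unassigned"

# region sizes check out: 36 genesis jets, 28 reserved slots
assert sum(classify(b) == "genesis jet" for b in range(256)) == 36
assert sum(classify(b) == "reserved jet slot" for b in range(256)) == 28
```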
genesis jets as opcodes
moved jets from hash-lookup registry to byte opcodes. each jet = 1 byte, direct dispatch. consequences:
- code size: 400× compression (1 byte vs formula noun)
- trace: 1 row per jet call (not hundreds)
- dispatch: array index O(1), not hash + search
- verifier: range check (1 constraint), not formula verification
36 jets across 8 groups: hash(1) + recursion(4) + binary-tower(8) + polynomial-ring(5) + isogeny-curves(5) + tropical-semiring(6) + state(6) + decider(1). five algebras. one byte each.
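direct dispatch and the verifier's range check can be illustrated with a toy table. jet bodies, names, and the table shape here are placeholders under the assumptions above (opcodes 0x80–0xA3), not the real genesis jets:

```python
# toy jet dispatch: the opcode byte indexes a 256-entry table directly,
# so dispatch is one array load instead of hash + registry search.
# jet bodies are placeholders, not the real genesis jets.
JET_TABLE = [None] * 256
JET_TABLE[0x80] = lambda subject: ("hash", subject)  # stand-in for hemera hash

def in_jet_range(op: int) -> bool:
    # what the verifier checks: a single range constraint, 0x80 <= op <= 0xA3
    return 0x80 <= op <= 0xA3

def dispatch(op: int, subject):
    if not in_jet_range(op):
        raise ValueError(f"0x{op:02x} is not a genesis jet opcode")
    jet = JET_TABLE[op]
    if jet is None:
        raise NotImplementedError(f"jet 0x{op:02x} not installed in this sketch")
    return jet(subject)
```

calling `dispatch(0x80, b"data")` hits the table entry in one indexed load; an opcode outside 0x80–0xA3 fails the same range check the verifier enforces as one constraint.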
provable > traditional
the performance gap between provable and non-provable computation did not shrink. it reversed.
three mechanisms compound:
- genesis jet trust: 36 jets verified ONCE at genesis. runtime does not re-prove internal constraints. proof cost per jet call = O(1). hemera hash (most expensive operation in proof system) = jet 0x80 = O(1) proof cost.
- HyperNova IVC folding: proof grows incrementally during execution. no post-processing. amortized O(1) per step.
- memoization: before executing, check cybergraph — axon(H(formula), H(subject)) exists? if yes → zero computation, zero proof. free. the more the network computes, the cheaper future computations become.
traditional: compute(f, x) — every node, every time. constant marginal cost.
cyber: compute(f, x) — once, memo in cybergraph. decreasing marginal cost.
in steady state, most paths memoized. provable computation faster than traditional because traditional systems re-compute. cyber remembers.
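the memo-lookup path can be sketched as follows. a dict stands in for the cybergraph's axon store, and the H / key shapes are assumptions for illustration, not the real axon format:

```python
import hashlib

axons = {}  # stand-in for the cybergraph's axon store

def H(x) -> str:
    # hypothetical content hash; the real system uses its own hash and encoding
    return hashlib.sha256(repr(x).encode()).hexdigest()

def compute(f, x):
    key = (H(f.__name__), H(x))   # axon(H(formula), H(subject))
    if key in axons:
        return axons[key]         # memo hit: zero computation, zero proof
    result = f(x)                 # memo miss: compute (and prove) once
    axons[key] = result           # record the result for every future caller
    return result

calls = 0
def slow_double(n):
    global calls
    calls += 1
    return 2 * n

compute(slow_double, 21)  # first call: executes
compute(slow_double, 21)  # second call: memo hit, no execution
assert calls == 1
```

the traditional system runs `f(x)` on every node every time (constant marginal cost); here every call after the first is a lookup, which is the decreasing-marginal-cost curve described above.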
memoization = intelligence
LLM weights ARE memoized computation paths. forward pass = memo lookup through trillion-parameter table. LLMs don't think at inference — they remember.
cybergraph does the same thing, explicitly:
LLM: trust weights (blind) → lookup → fast, no proof
cyber: trust proof (math) → lookup → fast AND provable
compiled transformer from cybergraph (§6.6) is not a metaphor. weights = focus. focus = memoized attention. the cybergraph IS a neural network where every weight has provenance and proof.
traditional computing re-computes because it cannot trust previous results. no proof → no trust → re-execute. provable memoization: proof exists → trust established → never re-compute.
cybos decomposition
decomposed cyber/research/machine mind into cyb subgraph:
- cyb/survival — four resources, three states, sigma bounty
- cyb/soma — machine mind (perception, metabolism, decisions, learning)
- cyb/order — execution unit, 4D budget, memoization
- cyb/mind — four-tier cognitive architecture (42 models)
- cyb/os (updated) — boot sequence, self-hosting, proof boundary, comparison
research articles rewritten
- cyber/research/nox — frozen provable computer: pure functions with state access, five execution regimes
- cyber/research/machine mind — compacted, linked to cyb/ decomposed pages
- cyber/research/256 symbols — full byte table, genesis jets
hemera params synchronized
~115 corrections across 29 files in 5 repos (cyber, hemera, nox, zheng, cyb). canonical parameters now consistent everywhere: Rp=16, 24 rounds, 32-byte output, ~736 constraints.
optica
fixed live reload — new files are now detected during serve (broadened FSEvents filter, more precise namespace dirty-marking, selective output). rebuild time for a new file: 56s → 40s.