the packer uses these scores to select the most valuable pages for each context budget: highest focus score first, greedy knapsack until the budget is full.
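The greedy selection above can be sketched in a few lines. This is a minimal illustration, not the real packer: the page names, focus scores, and token counts below are made-up stand-ins for the actual tri-kernel scores.

```python
# Greedy knapsack: take pages in descending focus order, skipping any
# page that would overflow the token budget. (Illustrative data only.)

def pack(pages, budget):
    """Select pages by descending focus score until the token budget is full."""
    selected, used = [], 0
    for page in sorted(pages, key=lambda p: p["focus"], reverse=True):
        if used + page["tokens"] <= budget:
            selected.append(page["name"])
            used += page["tokens"]
    return selected, used

# hypothetical scored pages
pages = [
    {"name": "attention.md", "focus": 0.92, "tokens": 3000},
    {"name": "consensus.md", "focus": 0.88, "tokens": 6000},
    {"name": "biology.md",   "focus": 0.41, "tokens": 4000},
]
print(pack(pages, budget=8000))
```

Note the classic greedy behavior: `consensus.md` is skipped because it would overflow the 8K budget, and the lower-ranked `biology.md` fills the remaining space instead.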
## sizes
| file | tokens | target models |
|---|---|---|
| 8k.md | 8K | local 7B, GPT-3.5 |
| 32k.md | 32K | GPT-4, local 13-32B |
| 128k.md | 128K | Claude Haiku, Gemini, GPT-4 Turbo |
| 200k.md | 200K | Claude Sonnet |
| 500k.md | 500K | Claude with room for dialogue |
| 900k.md | 900K | Claude Opus 1M |
| 1400k.md | 1.4M | 2M context windows, full graph + subgraphs |
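A small helper can pick a tier from the table for a given model window. This is a hedged sketch: the 4K reserve for dialogue and the fallback to the smallest tier are assumptions, not part of the build.

```python
# Pick the largest size tier that fits a model's context window while
# leaving some headroom for the conversation itself. The tier list
# mirrors the table above; the default reserve is an arbitrary choice.

TIERS = [8_000, 32_000, 128_000, 200_000, 500_000, 900_000, 1_400_000]

def pick_tier(context_window: int, reserve: int = 4_000) -> int:
    """Largest tier that fits the window with `reserve` tokens to spare."""
    fitting = [t for t in TIERS if t + reserve <= context_window]
    return max(fitting) if fitting else TIERS[0]

print(pick_tier(200_000))  # → 128000 (leaves headroom in a 200K window)
```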
## usage
drop any size file into your model's system prompt or context window:
```sh
# claude code
cat 200k.md | claude --system-prompt -

# api
{"system": "<contents of 128k.md>", "messages": [...]}

# local llm
llama-cli -m model.gguf --system-prompt-file 32k.md
```
## build
requires the cyber repo with analizer/context.nu

```sh
nu build.nu                          # all sizes
nu build.nu --sizes [128 500]        # specific sizes
nu build.nu --cyber-path ~/my/cyber  # custom path
```
## why this context wins
- self-describing: the knowledge graph contains its own theory of knowledge, attention, and relevance. a model reading it understands what it is reading and why
- mathematically ranked: pages are selected by the same tri-kernel algorithm the protocol uses for consensus. not curated by hand — ranked by graph structure
- compositional: every page uses wiki-links. the model sees the full link topology and can reason about relationships between concepts
- dense: the graph covers mathematics, cryptography, game theory, information theory, biology, economics — all unified under one protocol. maximum knowledge per token
- recursive: the context describes the process that generated the context. the model can verify and improve the ranking
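The compositional point above is easy to see concretely: wiki-links are plain `[[target]]` markers in the page text, so a model (or a script) can recover the link topology directly. A small sketch, with made-up sample text:

```python
# Extract outgoing wiki-links from a page. Handles both bare links
# [[target]] and aliased links [[target|display text]].
import re

WIKI_LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

def outgoing_links(page_text: str) -> list[str]:
    return WIKI_LINK.findall(page_text)

sample = "attention flows along [[edges]] toward [[consensus|the consensus kernel]]."
print(outgoing_links(sample))  # → ['edges', 'consensus']
```

Running this over every page yields the full link graph the ranking operates on.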
## license
Don't trust. Don't fear. Don't beg.