the reward structure that incentivizes neurons to create accurate cyberlinks

reward function

rewards are proportional to $\Delta\pi^*$ — the focus gained by the target particle after a cyberlink is added to the cybergraph. a neuron that links a previously overlooked particle to a relevant context receives the largest reward

$$r(\ell) \;\propto\; \Delta\pi^*(q) \;=\; \pi^*_{t+1}(q) - \pi^*_t(q)$$

where $q$ is the target particle of link $\ell$, $\pi^*_t$ is the focus of $q$ before the link is committed, and $\pi^*_{t+1}$ its focus after
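the rule above can be sketched in code. this is a minimal illustration, assuming focus $\pi^*$ is a PageRank-style score over the cybergraph; the `focus` and `reward` functions here are hypothetical helpers, not the protocol's actual computation

```python
def focus(links, damping=0.85, iters=50):
    """PageRank-style focus over a cybergraph given as (source, target) pairs.

    Hypothetical stand-in for the protocol's focus computation.
    """
    nodes = sorted({p for link in links for p in link})
    out = {n: [t for s, t in links if s == n] for n in nodes}
    pi = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for s in nodes:
            targets = out[s] or nodes  # dangling particles spread focus evenly
            share = damping * pi[s] / len(targets)
            for t in targets:
                nxt[t] += share
        pi = nxt
    return pi

def reward(links, new_link):
    """reward for new_link = focus gained by its target particle q."""
    q = new_link[1]
    before = focus(links).get(q, 0.0)  # 0 if q was not yet in the graph
    after = focus(links + [new_link])[q]
    return after - before
```

linking a previously absent particle yields a positive reward, while duplicating an existing link leaves the target's focus (and hence the reward) essentially unchanged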

discovery premium

early discovery is maximally rewarded. the first neuron to surface a valuable particle captures the largest $\Delta\pi^*$. late consensus-following earns little — when many neurons have already linked a particle, the marginal focus gain shrinks toward zero

this creates a race to discover genuine relevance rather than copy existing links
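the shrinking marginal gain can be shown with a toy model (not the protocol's actual focus computation) in which a particle's focus share is its inbound-link count over the total link count of the cybergraph

```python
def focus_share(inbound, total_links):
    """toy focus: fraction of all links that point at this particle."""
    return inbound / total_links if total_links else 0.0

def marginal_gain(k, other_links):
    """focus gained by the k-th link pointing at the same particle,
    given other_links links elsewhere in the cybergraph."""
    before = focus_share(k - 1, other_links + k - 1)
    after = focus_share(k, other_links + k)
    return after - before

# gain captured by the 1st, 2nd, ... 5th neuron to link the particle;
# each successive linker earns strictly less than the one before
gains = [marginal_gain(k, other_links=100) for k in range(1, 6)]
```

in this model the k-th gain works out to $100 / ((100+k)(99+k))$, which shrinks toward zero as consensus forms, so the discovery premium goes to whoever links first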

relation to costly signals

learning incentives and costly signal mechanics work together: the staking cost filters out noise, while the reward function amplifies signal. a neuron must risk real tokens (cost) to earn rewards (incentive), ensuring alignment between economic interest and knowledge production
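the interaction can be sketched as an expected-profit calculation; `stake` and `reward_rate` are illustrative numbers, not protocol parameters

```python
def expected_profit(delta_focus, stake=10.0, reward_rate=1000.0):
    """profit = reward proportional to the focus gain, minus the staked cost.

    Illustrative sketch: stake and reward_rate are hypothetical constants.
    """
    return reward_rate * delta_focus - stake

# linking a genuinely overlooked particle yields a large focus gain: profitable
# copying an already well-linked particle yields almost none: a net loss
signal_profit = expected_profit(delta_focus=0.05)
noise_profit = expected_profit(delta_focus=0.001)
```

because the stake is paid regardless of outcome, only links whose focus gain clears the cost are worth making, which is exactly the alignment between economic interest and knowledge production described above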

see cyberlink, focus, neuron, particle, costly signal, relevance machine

Local Graph