Goldilocks Homomorphic Encryption: FHE Native to Trident
The Construction That Unifies Privacy, Provability, Intelligence, and Quantum Power Over One Field
Part 0: The Discovery
We set out to answer whether Trident could have a structural groundbreak for FHE comparable to what $\mathbb{F}_p$ provides for zero-knowledge proofs, neural networks, and quantum computing. We concluded initially that noise in LWE is fundamental — not an impedance mismatch but a mathematical necessity for security.
That conclusion was correct but incomplete.
The breakthrough is not "remove noise from FHE." The breakthrough is: when TFHE is parameterized over the Goldilocks field, the entire FHE computation — ciphertext arithmetic, NTT, bootstrapping, lookup table evaluation — becomes native Trident field arithmetic. The impedance mismatch between FHE and its verification vanishes.
This is not theoretical. Three independent research groups have already validated pieces of this construction:
- Zama's Thibault & Walter (ACM CCS 2025): Proved TFHE bootstrapping using plonky2, instantiating both the FHE scheme and the SNARK prover over the Goldilocks field $p = 2^{64} - 2^{32} + 1$. The first practical verifiable FHE bootstrapping ever demonstrated.
- FPGA hardware team (arXiv 2025): Built a TFHE accelerator using NTT over the Goldilocks prime $q = 2^{64} - 2^{32} + 1$ for polynomial multiplication in programmable bootstrapping. Hardware engineers independently chose Goldilocks for optimal TFHE performance.
- Packed Sumcheck team (ePrint 2025): Proved TFHE bootstrapping 534× faster than Thibault & Walter, with all computations over a prime field. Confirmed the viability of prime-field FHE.
- CRYPTO 2025 (Cascudo et al.): Identified the core problem in verifiable FHE: "HE ciphertexts are typically elements of the ring $R_q$, whereas SNARKs typically work best on computations over large finite fields." This is the impedance mismatch that Goldilocks parameterization eliminates.
No one has unified these results. No one has connected them to neural network inference, quantum computation, or smart contract execution. No one has recognized that this makes FHE a fourth structural pillar — not just an integration story but a genuine groundbreak.
Until now.
Part I: TFHE Over Goldilocks — The Construction
Background: How TFHE Works
TFHE (Torus Fully Homomorphic Encryption) operates on LWE (Learning With Errors) ciphertexts. The core structures:
LWE Ciphertext: $\mathbf{c} = (\mathbf{a}, b) \in \mathbb{Z}_q^{n+1}$ where $b = \langle \mathbf{a}, \mathbf{s} \rangle + m \cdot \Delta + e$
- $\mathbf{s} \in \mathbb{Z}_q^n$: secret key
- $m$: plaintext message
- $\Delta = q/t$: scaling factor ($t$ = plaintext modulus)
- $e$: noise (small, drawn from a Gaussian distribution)
Homomorphic Addition: $\mathbf{c}_1 + \mathbf{c}_2$ — vector addition mod $q$. Native and cheap; the noise grows only additively.
Programmable Bootstrapping (PBS): The core innovation of TFHE. In one operation:
- Refresh the noise (allow unlimited computation depth)
- Evaluate an arbitrary lookup table on the encrypted value
PBS works through blind rotation: an RLWE ciphertext encoding a lookup table polynomial is rotated by the encrypted value using a sequence of CMUX gates (controlled multiplexers). Each CMUX gate involves polynomial multiplication in the ring $R_q = \mathbb{Z}_q[x]/(x^N + 1)$.
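A minimal sketch of the LWE layer in Rust, assuming the Goldilocks modulus. Constant and function names are ours, and the mask $\mathbf{a}$ and noise $e$ are taken as inputs rather than sampled, so this illustrates the algebra only, not a secure implementation:

```rust
const P: u128 = (1u128 << 64) - (1u128 << 32) + 1; // Goldilocks prime, q = p
const T: u128 = 1 << 10;                           // plaintext modulus t
const DELTA: u128 = P / T;                         // scaling factor Δ = ⌊p/t⌋

fn add_p(a: u128, b: u128) -> u128 { (a + b) % P }
fn sub_p(a: u128, b: u128) -> u128 { (a + P - b) % P }
fn mul_p(a: u128, b: u128) -> u128 { a % P * (b % P) % P }

/// Ciphertext (a, b) with b = <a, s> + m·Δ + e (mod p).
fn lwe_encrypt(s: &[u128], a: &[u128], m: u128, e: u128) -> (Vec<u128>, u128) {
    let dot = a.iter().zip(s).fold(0, |acc, (&ai, &si)| add_p(acc, mul_p(ai, si)));
    (a.to_vec(), add_p(dot, add_p(mul_p(m % T, DELTA), e)))
}

/// Homomorphic addition: componentwise addition mod p; the noises add.
fn lwe_add(c1: &(Vec<u128>, u128), c2: &(Vec<u128>, u128)) -> (Vec<u128>, u128) {
    let a = c1.0.iter().zip(&c2.0).map(|(&x, &y)| add_p(x, y)).collect();
    (a, add_p(c1.1, c2.1))
}

/// Decryption: m = round((b - <a, s>) / Δ) mod t.
fn lwe_decrypt(s: &[u128], c: &(Vec<u128>, u128)) -> u128 {
    let dot = c.0.iter().zip(s).fold(0, |acc, (&ai, &si)| add_p(acc, mul_p(ai, si)));
    (sub_p(c.1, dot) + DELTA / 2) / DELTA % T
}
```

Every line is addition and multiplication mod $p$: exactly the operations std.field.core exposes and a Goldilocks STARK constrains natively.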
The Goldilocks Instantiation
Set $q = p = 2^{64} - 2^{32} + 1$ (the Goldilocks prime).
Now every component becomes Goldilocks field arithmetic:
LWE operations: Vector addition and inner products over $\mathbb{F}_p$. These are exactly the operations in Trident's std.field.core.
NTT for polynomial multiplication: The polynomial ring $R_p = \mathbb{F}_p[x]/(x^N + 1)$ requires $2N$-th roots of unity in $\mathbb{F}_p$. Since $p - 1 = 2^{32}(2^{32} - 1)$, we have $2N | (p-1)$ for $N$ up to $2^{31}$. Standard TFHE uses $N = 1024$ or $2048$. Goldilocks supports NTT for polynomial degrees up to two billion — far beyond any FHE requirement. The NTT is exactly std.field.poly.ntt.
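To make this concrete, here is a minimal radix-2 forward NTT over Goldilocks in Rust, an illustrative sketch rather than the actual std.field.poly.ntt. It assumes 7 generates $\mathbb{F}_p^*$ (the multiplicative generator plonky2 uses); negacyclic multiplication in $R_p$ additionally pre-twists the inputs by powers of a $2N$-th root of unity:

```rust
const P: u128 = (1u128 << 64) - (1u128 << 32) + 1; // Goldilocks prime

fn pow_p(mut b: u128, mut e: u128) -> u128 {
    let mut acc = 1u128;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

/// In-place forward NTT: evaluates the polynomial at all n-th roots of unity.
fn ntt(a: &mut [u128]) {
    let n = a.len();
    assert!(n.is_power_of_two() && (P - 1) % n as u128 == 0);
    // Bit-reversal permutation.
    let mut j = 0usize;
    for i in 1..n {
        let mut bit = n >> 1;
        while j & bit != 0 { j ^= bit; bit >>= 1; }
        j |= bit;
        if i < j { a.swap(i, j); }
    }
    // Cooley–Tukey butterflies.
    let mut len = 2;
    while len <= n {
        let w_len = pow_p(7, (P - 1) / len as u128); // primitive len-th root (7 generates F_p^*)
        for start in (0..n).step_by(len) {
            let mut w = 1u128;
            for k in 0..len / 2 {
                let u = a[start + k];
                let v = a[start + k + len / 2] * w % P;
                a[start + k] = (u + v) % P;
                a[start + k + len / 2] = (u + P - v) % P;
                w = w * w_len % P;
            }
        }
        len <<= 1;
    }
}
```

This single transform is the one std.field.poly.ntt would provide to FHE (CMUX products), the STARK prover (FRI), and quantum simulation alike.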
Blind rotation: Each CMUX gate computes an external product between a GGSW ciphertext (encrypting a key bit) and a GLWE ciphertext (the accumulator). This involves:
- Gadget decomposition: expressing ciphertext coefficients in a small base (mod $p$ arithmetic)
- Polynomial multiplication: NTT-based multiplication in $R_p$
- Polynomial addition: coefficient-wise addition mod $p$
Every operation: field arithmetic over $\mathbb{F}_p$.
Lookup table evaluation: The "test polynomial" $v(X) = \sum_{i=0}^{N-1} f(i) \cdot X^i$ encodes the function $f$ being evaluated. Its coefficients are elements of $\mathbb{F}_p$. The blind rotation rotates $v(X)$ by the encrypted value, and sample extraction retrieves the result.
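Using the redundant encoding detailed in Part V ($v_i = f(\lfloor i \cdot t / N \rfloor) \cdot \Delta$), building the test polynomial takes a few lines. A sketch in Rust, names ours:

```rust
const P: u128 = (1u128 << 64) - (1u128 << 32) + 1; // Goldilocks prime

/// Coefficients of v(X) = Σ_i f(⌊i·t/N⌋)·Δ · X^i for a function f: Z_t -> Z_t.
/// Each message value is replicated N/t times so rounding noise in the blind
/// rotation lands on a coefficient carrying the same value.
fn test_polynomial(f: impl Fn(u128) -> u128, n: u128, t: u128) -> Vec<u128> {
    let delta = P / t; // coefficients are scaled like ordinary plaintexts
    (0..n).map(|i| f(i * t / n) % t * delta % P).collect()
}
```

The same vector of $\mathbb{F}_p$ elements doubles as the lookup table a STARK authenticates and as the activation table in std.nn; this is the duality developed in Part II.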
Concrete Parameters
Following Thibault & Walter (CCS 2025), who validated this parameterization:
| Parameter | Value | Notes |
|---|---|---|
| LWE dimension $n$ | 722 | Security parameter |
| RLWE dimension $N$ | 2048 | Polynomial degree |
| Ciphertext modulus $q$ | $2^{64} - 2^{32} + 1$ | Goldilocks prime |
| Plaintext modulus $t$ | Variable (up to ~10 bits) | Message space |
| Noise std. dev. $\sigma$ | $\approx 2^{15}$ | Security vs. correctness |
| Decomposition base $\beta$ | $2^6$ | For gadget decomposition |
| Security level | 128 bits | Against classical + quantum |
Security: LWE with $n = 722$, $q \approx 2^{64}$, $\sigma \approx 2^{15}$ provides approximately 128 bits of security. This is based on standard lattice hardness assumptions (worst-case to average-case reductions from GapSVP/SIVP). Post-quantum secure: best known quantum attacks provide only modest speedup via Grover-enhanced lattice sieving.
Noise budget: In balanced representation, "small" means $|e| < \Delta/2 = p/(2t)$. For $p \approx 2^{64}$ and $t = 2^{10}$, the noise threshold is $\approx 2^{53}$. Initial noise $\approx 2^{15}$. After PBS, noise is refreshed to $\approx 2^{15}$ regardless of input noise level. This gives effectively unlimited computation depth.
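As a back-of-the-envelope check of these numbers (our arithmetic, following the standard correctness condition):

$$\Delta = \frac{p}{t} \approx 2^{54}, \qquad \text{decryption succeeds iff } |e| < \frac{\Delta}{2} \approx 2^{53}.$$

Summing $k$ ciphertexts with independent fresh noise of width $\sigma \approx 2^{15}$ grows the noise to roughly $\sqrt{k}\,\sigma$, so the budget tolerates on the order of $2^{2(53-15)} = 2^{76}$ homomorphic additions between bootstraps; depth is limited by PBS-based multiplications, not additions.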
Part II: The Structural Synergies
Synergy 1: The Impedance Mismatch Disappears
The CRYPTO 2025 paper by Cascudo et al. identifies the fundamental problem:
"HE ciphertexts are typically elements of the ring $R_q$, whereas SNARKs typically work best on computations over large finite fields."
Every existing approach to verifiable FHE must bridge this gap:
- Rinocchio: emulates ring arithmetic in a prime field (massive overhead)
- HELIOPOLIS: builds custom PIOPs for ring operations (complex, scheme-specific)
- Thibault & Walter's breakthrough: set $q$ = Goldilocks prime, use plonky2 (Goldilocks SNARK)
When $q = p$, the ring $R_q = R_p = \mathbb{F}_p[x]/(x^N + 1)$ is a polynomial ring over the STARK's native field. Every FHE operation is already an arithmetic circuit over $\mathbb{F}_p$. No emulation. No translation. No overhead from crossing algebraic boundaries.
For Trident specifically: Triton VM executes arithmetic circuits over Goldilocks. TFHE over Goldilocks IS an arithmetic circuit over Goldilocks. The FHE computation and its STARK proof are the same mathematical object viewed from different angles:
FHE view: polynomial multiplication in R_p for blind rotation
STARK view: arithmetic circuit over F_p to be proven
Trident: one program, one compilation, one proof
This is exactly analogous to how std.nn eliminates quantization overhead. EZKL converts float→field (losing accuracy). Standard verifiable FHE converts $R_q$→$\mathbb{F}_p$ (losing efficiency). Goldilocks TFHE in Trident: $R_p$ already IS $\mathbb{F}_p$ polynomials. Zero conversion. Zero overhead.
Synergy 2: Lookup Table Duality — The Deep Insight
This is the most profound structural connection in the entire Trident architecture.
In TFHE: Programmable Bootstrapping evaluates a lookup table (LUT) on encrypted data. The LUT is encoded as a polynomial — the "test polynomial" — whose coefficients represent the function values. PBS blind-rotates this polynomial by the encrypted input, effectively computing $f(\text{Enc}(x)) = \text{Enc}(f(x))$.
In Triton VM / STARK: The lookup argument proves that a claimed function evaluation $y = f(x)$ is correct by checking that the pair $(x, y)$ appears in a precomputed table. The Tip5 hash function's S-box ($x \mapsto x^{p-2}$, the modular inverse) is proven via exactly this mechanism.
In neural networks (std.nn): Activation functions (ReLU, GELU, SiLU) are proven via the same lookup argument — precomputed table of $(x, f(x))$ pairs, authenticated by the STARK.
Three different systems. Three different purposes. One mechanism: lookup table over $\mathbb{F}_p$. See rosetta-stone for the full treatment of the lookup table unification.
| System | Lookup Table Purpose | Mechanism |
|---|---|---|
| TFHE PBS | Evaluate function on encrypted data | Test polynomial blind-rotated by encrypted input |
| STARK | Prove function evaluation correct | Lookup argument in algebraic proof |
| std.nn | Neural network activation | Precomputed table authenticated by STARK |
And here is the key: when all three operate over $\mathbb{F}_p$, the lookup table IS THE SAME OBJECT.
A ReLU activation function can be:
- The test polynomial in TFHE PBS (evaluate ReLU on encrypted data)
- The lookup table in STARK proof (prove ReLU was computed correctly)
- The activation in std.nn (neural network inference)
One function definition. Three uses. Zero redundancy.
This means: a neural network inference that runs on encrypted data (FHE) and is proven correct (STARK) uses the same lookup table for all three purposes. The activation function is simultaneously the FHE bootstrapping function, the STARK verification function, and the neural network nonlinearity.
No other system can achieve this unification because no other system has all three components operating over the same field.
Synergy 3: FHE + STARK Composability
With Goldilocks TFHE, we can compose FHE and STARK proofs natively:
Verifiable FHE (proven externally):
Client encrypts data → Server evaluates FHE circuit →
Server generates STARK proof of correct evaluation →
Client verifies proof + decrypts result
The STARK proof covers the entire FHE evaluation: every NTT, every polynomial multiplication, every CMUX gate, every modulus operation. Proof size: ~200 KB (Thibault & Walter). Verification: <10 ms.
FHE inside STARK (proven internally):
Trident program uses encrypted data as witness →
FHE operations are part of the arithmetic circuit →
STARK proof covers both FHE operations and business logic
Because FHE operations ARE field arithmetic, they need no special treatment in the STARK. They're just more constraints in the same proof.
STARK inside FHE (recursive):
STARK verifier is itself an arithmetic circuit over F_p →
Evaluate STARK verifier homomorphically on encrypted proof →
Prove STARK verification without seeing the proof
This enables private proof verification — verifying a STARK proof without learning what was proven. The STARK verifier is a sequence of hash evaluations (Tip5) and field operations. All native to both FHE and Triton VM.
Synergy 4: NTT Unification
NTT (Number Theoretic Transform) is the workhorse of three separate systems:
| System | NTT Purpose | Ring |
|---|---|---|
| FHE | Polynomial multiplication for CMUX gates | $R_p = \mathbb{F}_p[x]/(x^N+1)$ |
| STARK | Polynomial evaluation for FRI protocol | $\mathbb{F}_p[x]$ |
| Quantum | Quantum Fourier Transform simulation | $\mathbb{F}_{p^2}[x]$ |
All three use NTT over $\mathbb{F}_p$ or its extensions. In Trident, std.field.poly.ntt serves all three. One implementation. One hardware acceleration path. One optimization effort benefits FHE, STARK proving, and quantum simulation simultaneously.
The Goldilocks prime was designed for fast NTT: $p - 1 = 2^{32}(2^{32} - 1)$ gives $2^{32}$-th roots of unity. The same property that makes STARK proofs fast also makes FHE bootstrapping fast and quantum simulation fast. See gfp-spec for hardware acceleration of these primitives.
Synergy 5: The divine() Bridge
Trident's divine() primitive — non-deterministic witness injection — connects to FHE in a novel way:
FHE key as divine() witness: The FHE secret key is private to the prover. In a STARK proof of FHE computation, the secret key enters via divine(). The proof verifies that the FHE operations were performed correctly without revealing the key. This is exactly how divine() works for ZK proofs generally, but applied to FHE key material.
Decryption result as divine() witness: After homomorphic evaluation, the result is still encrypted. The prover can decrypt (they know the key) and inject the plaintext result via divine(). The constraints verify: (1) the FHE evaluation was correct, (2) the decryption matches the ciphertext, (3) the plaintext result satisfies additional business logic. All in one proof.
Optimization via divine(): FHE computation is expensive. Sometimes it's cheaper to: (1) compute the function on plaintext, (2) inject the result via divine(), (3) prove in ZK that the result is correct. The divine() result serves as the optimization hint, and the STARK proof verifies correctness without requiring the verifier to redo the FHE computation. This is the FHE analog of how divine() accelerates general Trident programs.
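A sketch of the relation such constraints encode, with the divined plaintext modeled as an ordinary function argument (constants as in the LWE sketch of Part I; names are ours):

```rust
const P: u128 = (1u128 << 64) - (1u128 << 32) + 1; // Goldilocks prime
const T: u128 = 1 << 10;                           // plaintext modulus
const DELTA: u128 = P / T;                         // scaling factor

/// The check a STARK constraint system would enforce when a decryption result
/// enters via divine(): the claimed m must lie within Δ/2 of the ciphertext phase.
fn check_divined_plaintext(s: &[u128], a: &[u128], b: u128, m: u128) -> bool {
    let dot = a.iter().zip(s).fold(0, |acc, (&ai, &si)| (acc + ai * si % P) % P);
    let phase = (b + P - dot) % P;                   // b - <a, s> mod p
    let diff = (phase + P - m % T * DELTA % P) % P;  // phase - m·Δ mod p
    let dist = diff.min(P - diff);                   // |·| in balanced representation
    dist < DELTA / 2                                 // within the noise bound
}
```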
Part III: The Four-Pillar Architecture
FHE over Goldilocks is not merely an "integration" with Trident. It exhibits the same structural pattern as the original three pillars:
| Pillar | Impedance Mismatch Eliminated | How $\mathbb{F}_p$ Helps |
|---|---|---|
| ZK | Program → arithmetic circuit | Trident programs ARE circuits over $\mathbb{F}_p$ |
| AI | Float weights → field elements | Weights ARE field elements, no quantization |
| Quantum | Binary → prime dimension | Prime qudits: 1 gate vs. 8000 gates |
| FHE | $R_q$ ciphertext → $\mathbb{F}_p$ proof | When $q = p$: ciphertext arithmetic IS proof arithmetic |
The pattern is identical: each domain has a "natural home" in some algebraic structure, and that structure has an impedance mismatch with proof systems. Trident eliminates each mismatch by making the Goldilocks field the universal medium.
For FHE, the mismatch is between $R_q$ (where ciphertexts live) and $\mathbb{F}_p$ (where proofs live). When $q = p$, $R_q = R_p$ and the mismatch vanishes. This is NOT the same as "just choosing Goldilocks as modulus" — it's recognizing that this choice aligns the entire algebraic stack across FHE, STARK, neural networks, and quantum computation.
Updated stdlib Architecture
┌─────────────────────┐
│ Applications │
│ std.agent │
│ std.defi │
│ std.science │
└──────────┬──────────┘
│
┌─────────────────────────┼─────────────────────────┐
│ │ │ │ │
┌────┴───┐ ┌─────┴────┐ ┌────┴───┐ ┌────┴────┐ ┌─────┴────┐
│nn_fhe │ │nn_quantum│ │nn_priv │ │fhe_quant│ │quant_priv│
│Private │ │Quantum │ │ZK │ │Quantum │ │Quantum │
│ AI │ │ ML │ │ AI │ │ FHE │ │ Crypto │
└────┬───┘ └─────┬────┘ └────┬───┘ └────┬────┘ └─────┬────┘
│ │ │ │ │
┌────┴───┐ ┌─────┴────┐ ┌───┴────┐ ┌────┴────┐ │
│ std.nn │ │std.fhe │ │std.priv│ │std.quant│───────┘
│ (AI) │ │ (FHE) │ │ (ZK) │ │(Quantum)│
└────┬───┘ └─────┬────┘ └───┬────┘ └────┬────┘
│ │ │ │
└────────────┼──────────┼─────────────┘
│ │
┌─────┴──────────┴─────┐
│ Foundation │
│ std.field (F_p) │
│ std.field.poly.ntt │
│ std.crypto (Tip5) │
└──────────────────────┘
std.fhe — The Fourth Pillar
See trident-complete-stdlib for the full std.fhe library specification.
std.fhe
├── lwe LWE operations over F_p
│ ├── encrypt LWE encryption: (a, b = <a,s> + m·Δ + e)
│ ├── decrypt LWE decryption: m = round((b - <a,s>) / Δ)
│ ├── add Homomorphic addition (vector addition mod p)
│ ├── scalar_mul Scalar multiplication
│ ├── key_switch LWE key switching
│ └── mod_switch Modulus switching
│
├── glwe Generalized LWE (polynomial ring)
│ ├── encrypt GLWE encryption in R_p
│ ├── decrypt GLWE decryption
│ ├── external_product GGSW × GLWE → GLWE
│ ├── cmux Controlled multiplexer (core of blind rotation)
│ └── sample_extract GLWE → LWE (extract single coefficient)
│
├── bootstrap Programmable Bootstrapping
│ ├── blind_rotate Core blind rotation (loop of CMUX gates)
│ ├── pbs Full programmable bootstrapping
│ │ ├── standard Standard PBS (single LUT)
│ │ ├── multi_lut PBS evaluating multiple LUTs simultaneously
│ │ └── wop_pbs Without-padding PBS (larger precision)
│ ├── test_polynomial Test polynomial construction
│ │ ├── from_function Build test poly from f: Z_t → Z_t
│ │ ├── relu Pre-built: ReLU test polynomial
│ │ ├── sigmoid Pre-built: sigmoid test polynomial
│ │ ├── sign Pre-built: sign function test polynomial
│ │ ├── identity Pre-built: identity (pure noise refresh)
│ │ └── custom Custom test polynomial from lookup table
│ └── circuit_bootstrap Full circuit bootstrapping (LWE → GGSW)
│
├── key Key management
│ ├── secret_key Secret key generation
│ ├── public_key Public key generation (not always needed)
│ ├── bootstrap_key Bootstrapping key (GGSW encryptions of sk bits)
│ ├── key_switch_key Key switching key
│ └── key_commit Merkle commitment to keys (for on-chain binding)
│
├── arithmetic Homomorphic arithmetic on encrypted integers
│ ├── add Encrypted addition
│ ├── sub Encrypted subtraction
│ ├── mul Encrypted multiplication (via PBS or tensor product)
│ ├── neg Encrypted negation
│ ├── compare Encrypted comparison (>, <, ==) via PBS
│ ├── min_max Encrypted min/max
│ └── bitwise Encrypted bitwise operations (AND, OR, XOR)
│
├── verify Verifiable FHE operations
│ ├── prove_bootstrap STARK proof of correct PBS execution
│ ├── prove_evaluation STARK proof of arbitrary FHE circuit
│ ├── prove_decryption STARK proof of correct decryption
│ ├── prove_key_gen STARK proof of correct key generation
│ └── recursive_verify IVC for iterated FHE operations
│
├── noise Noise management and analysis
│ ├── estimate Noise estimation for given parameters
│ ├── budget Remaining noise budget computation
│ ├── refresh Explicit noise refresh (via PBS with identity)
│ └── param_select Automatic parameter selection for target depth
│
└── compile Compilation targets
├── triton Compile FHE ops to Triton VM (STARK-proven)
├── concrete Export to Zama's Concrete framework
├── tfhe_rs Export to Zama's TFHE-rs library
└── hardware FPGA/ASIC acceleration interface
Key design decisions:
Pre-built test polynomials for neural network activations. std.fhe.bootstrap.test_polynomial.relu provides a test polynomial that computes ReLU on encrypted data. This is the same function as std.nn.activation.relu (computed on plaintext). The lookup table entries are identical. This is the concrete manifestation of the lookup table duality.
Verification is built-in, not bolted on. std.fhe.verify provides STARK proofs of every FHE operation. Because $q = p$, these proofs have zero impedance mismatch. The proof is over the same field as the computation.
Key commitment for on-chain binding. std.fhe.key.key_commit creates a Merkle tree (Poseidon2 / Tip5 hash) over the bootstrapping key. This commitment can be stored on-chain, binding a particular FHE key to a smart contract. Users can verify that a specific key was used for computation without seeing the key.
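A structural sketch of such a commitment: a binary Merkle root over field-element digests of the bootstrapping key. The compression function below is a deliberately non-cryptographic stand-in; the real construction would use Tip5 (or Poseidon2) over $\mathbb{F}_p$ as stated above:

```rust
const P: u128 = (1u128 << 64) - (1u128 << 32) + 1; // Goldilocks prime

/// Placeholder two-to-one compression. NOT cryptographic; stands in for Tip5.
fn compress(l: u128, r: u128) -> u128 {
    (l * 31 + r * 131 + 7) % P
}

/// Merkle root over key digests; the root is what gets published on-chain.
fn merkle_root(mut leaves: Vec<u128>) -> u128 {
    assert!(!leaves.is_empty());
    while leaves.len() > 1 {
        if leaves.len() % 2 == 1 {
            leaves.push(*leaves.last().unwrap()); // duplicate the odd tail
        }
        leaves = leaves.chunks(2).map(|pair| compress(pair[0], pair[1])).collect();
    }
    leaves[0]
}
```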
Part IV: The Intersection Layers
std.nn_fhe — Private Neural Network Inference
The intersection of AI and FHE. Neural networks that run on encrypted data.
std.nn_fhe
├── layer FHE-compatible neural network layers
│ ├── linear_enc Linear layer on encrypted inputs
│ │ (matrix-vector multiply, all in R_p)
│ ├── conv2d_enc Convolution on encrypted inputs
│ └── embedding_enc Embedding lookup on encrypted tokens
│
├── activation_enc Encrypted activation functions via PBS
│ ├── relu_enc ReLU via PBS (test poly from std.fhe)
│ ├── sigmoid_enc Sigmoid via PBS
│ ├── sign_enc Sign function via PBS
│ └── custom_enc Custom activation via PBS
│
├── model_enc Complete encrypted model inference
│ ├── mlp_enc MLP on encrypted data
│ ├── cnn_enc CNN on encrypted data
│ ├── tree_enc Decision tree on encrypted data (via PBS)
│ └── logistic_enc Logistic regression on encrypted data
│
├── hybrid Mixed plaintext/ciphertext computation
│ ├── encrypt_input Encrypt user data for model input
│ ├── decrypt_output Decrypt model output (client-side)
│ ├── model_weights_plain Model weights in plaintext, data encrypted
│ └── model_weights_enc Both model and data encrypted
│
└── prove Verification of encrypted inference
├── prove_inference STARK proof of correct encrypted inference
├── prove_model_match Prove inference used committed model
└── prove_accuracy Prove model achieves claimed accuracy on encrypted test set
The power play: A neural network model committed on-chain (std.nn_private.marketplace.model_commit). User encrypts their data with FHE (std.fhe.lwe.encrypt). Server runs inference on encrypted data (std.nn_fhe.model_enc.mlp_enc). Server generates STARK proof of correct execution (std.nn_fhe.prove.prove_inference). User verifies proof and decrypts result. See privacy-trilateral for the complete ZK+FHE+MPC privacy architecture.
The model owner never sees the data. The data owner never sees the model weights. The STARK proof verifies correct execution. All over one field. No impedance mismatch anywhere in the pipeline.
std.fhe_quantum — Quantum-FHE Intersection
std.fhe_quantum
├── quantum_bootstrap Quantum-accelerated bootstrapping
│ ├── grover_rotation Grover search for optimal blind rotation path
│ └── quantum_ntt Quantum NTT for polynomial multiplication
│
├── qfhe Quantum FHE (future: no-cloning security)
│ ├── quantum_encrypt Quantum state as ciphertext
│ ├── unitary_eval Homomorphic evaluation via unitary gates
│ └── quantum_decrypt Measurement-based decryption
│
└── hybrid Hybrid classical-quantum FHE
├── classical_fhe_quantum_compute
│ Classical encryption, quantum evaluation
└── quantum_verified_classical_fhe
Quantum randomness for FHE parameters
Near-term: Quantum-accelerated NTT for FHE bootstrapping. The NTT is a Fourier transform, and quantum computers offer quadratic speedup for Fourier-related operations. Since the NTT dominates FHE bootstrapping cost, quantum acceleration directly speeds up the most expensive FHE operation.
Long-term: True quantum FHE where security comes from no-cloning rather than noise. When quantum memory matures, this eliminates the noise overhead entirely. Trident's std.quantum provides the infrastructure for quantum state manipulation; std.fhe_quantum.qfhe provides the FHE-specific protocols.
Part V: The PBS-Activation-Lookup Unification — Full Technical Detail
This section develops the deepest technical insight: three systems using one mechanism.
The Mathematical Object
A function table $T_f$ for $f: \{0, 1, \ldots, t-1\} \to \mathbb{F}_p$ is a vector of $t$ field elements:
$$T_f = (f(0), f(1), \ldots, f(t-1)) \in \mathbb{F}_p^t$$
Use 1: TFHE Programmable Bootstrapping
PBS encodes $T_f$ as the test polynomial:
$$v(X) = \sum_{i=0}^{N-1} f\!\bigl(\lfloor i \cdot t / N \rfloor\bigr) \cdot X^i \;\in\; R_p$$
The blind rotation computes $X^{-\tilde{b}} \cdot v(X) \bmod (X^N + 1)$, where $\tilde{b}$ is the encrypted input (after modulus switching). Sample extraction retrieves the constant coefficient, giving $\text{Enc}(f(m))$.
Cost: $n$ CMUX gates, each involving polynomial multiplication in $R_p$ via NTT. Total: $O(n \cdot N \log N)$ field operations.
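For intuition, the rotation itself is plain index arithmetic in $R_p$. The sketch below computes $X^{-r} \cdot v(X) \bmod (X^N + 1)$ in the clear (names ours), whereas PBS performs the same rotation blindly through the $n$ CMUX gates:

```rust
const P: u128 = (1u128 << 64) - (1u128 << 32) + 1; // Goldilocks prime

/// Negacyclic rotation: the coefficient of X^j moves to exponent j - r, with a
/// sign flip on wraparound because X^N = -1 in F_p[x]/(x^N + 1).
fn negacyclic_rotate(v: &[u128], r: usize) -> Vec<u128> {
    let n = v.len();
    let mut out = vec![0u128; n];
    for (j, &c) in v.iter().enumerate() {
        let e = (j + 2 * n - r % (2 * n)) % (2 * n); // exponent j - r mod 2N
        if e < n { out[e] = c; } else { out[e - n] = (P - c) % P; }
    }
    out
}
```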
Use 2: STARK Lookup Argument
The STARK proves that a function evaluation $y = f(x)$ is correct by verifying that $(x, y)$ appears in the table $T_f$.
The lookup argument (as in Plookup or the Tip5 mechanism): the prover commits to a sorted version of the table augmented with the queried values. The verifier checks a polynomial identity relating the original table, the sorted table, and the query. The permutation argument ensures consistency.
Cost: $O(|T_f| \log |T_f|)$ field operations for the lookup argument.
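One standard algebraic form of this multiset check is the log-derivative ("logUp") formulation; Triton VM's exact identity may differ, but the shape is:

$$\sum_{i} \frac{1}{\alpha + x_i + \beta\, y_i} \;=\; \sum_{j} \frac{m_j}{\alpha + j + \beta\, f(j)}$$

where $(x_i, y_i)$ are the queried pairs, $(j, f(j))$ ranges over the rows of $T_f$, $m_j$ is the multiplicity of row $j$ among the queries, and $\alpha, \beta$ are verifier challenges. With high probability over the challenges, the identity holds over $\mathbb{F}_p$ iff every queried pair appears in the table.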
Use 3: Neural Network Activation
The activation function $\sigma: \mathbb{F}_p \to \mathbb{F}_p$ (ReLU, GELU, etc.) is computed by looking up the input in the precomputed table $T_\sigma$.
For Trident's field-native neural networks, the activation is applied elementwise to the output of a linear layer. Each application is one lookup in $T_\sigma$.
Cost: $O(1)$ per activation (table is precomputed), $O(|T_\sigma|)$ to authenticate via STARK.
The Unification
In Trident, a single function table $T_f$ serves all three purposes:
```
// Define the function once
fn relu(x: Field) -> Field {
    if x.balanced_repr() >= 0 { x } else { Field::zero() }
}

// Use 1: Neural network activation (std.nn)
let activated = std_nn::activation::apply(input, relu);
// Proven via STARK lookup argument

// Use 2: FHE bootstrapping (std.fhe)
let test_poly = std_fhe::bootstrap::test_polynomial::from_function(relu);
let encrypted_activated = std_fhe::bootstrap::pbs(encrypted_input, bsk, test_poly);
// PBS evaluates relu on encrypted data

// Use 3: Both simultaneously (std.nn_fhe)
let enc_result = std_nn_fhe::activation_enc::custom_enc(enc_input, relu);
// FHE evaluation + STARK proof + same lookup table
```
One function definition → three execution modes → one proof mechanism. The lookup table duality is now a trilateral duality: FHE ↔ STARK ↔ Neural Network.
Why This Cannot Exist Outside Trident
For this unification to work, you need:
- Field-native neural networks (no float→field conversion) — Trident's std.nn
- FHE over the same field ($q = p$ = Goldilocks) — Goldilocks TFHE
- STARK proofs over the same field — Triton VM
- Lookup argument that authenticates both FHE PBS and NN activations — Tip5 mechanism
- Smart contract execution environment — neptune / Level 1
No other system has all five. EZKL has (1) partially but not (2). Zama's fhEVM has (2) but not (1) or (3) natively. Cairo/Giza has (3) but not (1) or (2). Ritual has none natively.
Part VI: Concrete Application — Verifiable Private AI Inference
The Complete Pipeline
Setup (one-time):
Model owner:
1. Train neural network in F_p (std.nn, field-native)
2. Commit model weights to Merkle tree (Tip5 hash)
3. Publish root hash on-chain (Neptune smart contract)
4. Generate FHE key pair (std.fhe.key)
5. Publish FHE public key and key commitment on-chain
Inference (per request):
User (client):
1. Prepare input data as F_p elements
2. Encrypt input with model owner's FHE public key
3. Submit encrypted input to on-chain contract (or off-chain compute node)
Server (prover):
1. Load model weights from commitment
2. Execute neural network inference on encrypted input:
- Linear layers: matrix-vector multiply in R_p (homomorphic)
- Activations: PBS with test polynomial (= same lookup table as std.nn)
- Normalization: field arithmetic operations (homomorphic)
3. Generate STARK proof of entire computation:
- Proof that model weights match commitment
- Proof that FHE operations are correct
- Proof that the output ciphertext is the correct result
4. Return encrypted result + STARK proof
User (verifier):
1. Verify STARK proof (< 10ms)
2. Decrypt result with their FHE secret key
3. Obtain inference result
What is guaranteed:
- Data privacy: Server never sees plaintext input (FHE encryption)
- Model privacy: User never sees model weights (committed via Merkle tree, accessed via STARK witness)
- Computation integrity: STARK proof guarantees correct execution
- Post-quantum security: Both LWE (FHE) and STARK (hash-based) are post-quantum
- On-chain verifiability: Anyone can verify the STARK proof
- Economic settlement: Smart contract handles payment based on verified inference
Benchmark Estimates
Based on Thibault & Walter (CCS 2025) results and Packed Sumcheck (2025) improvements:
| Metric | Thibault & Walter | With Packed Sumcheck (est.) | Trident target |
|---|---|---|---|
| PBS proof time | ~20 min | ~2.3 sec | < 5 sec |
| PBS proof size | ~200 KB | ~100 KB | < 150 KB |
| Verification time | < 10 ms | < 10 ms | < 10 ms |
| Full MNIST inference (encrypted) | ~hours (est.) | ~minutes (est.) | < 1 min |
The 534× speedup from Packed Sumcheck makes verifiable FHE bootstrapping practical for near-real-time applications.
Part VII: Why This Is a Groundbreak, Not Just Integration
The Test
We established the "groundbreak test" earlier:
- ZK: Trident eliminates program→circuit impedance mismatch
- AI: Trident eliminates float→field quantization
- Quantum: Trident eliminates binary→prime gate explosion
FHE: Trident eliminates $R_q$→$\mathbb{F}_p$ proof impedance mismatch.
The pattern is identical. Each domain has computations that naturally reduce to field arithmetic, but existing systems force a translation layer. Trident removes the translation layer by making $\mathbb{F}_p$ the universal computation medium.
What Changed From Our Earlier Analysis
Our earlier analysis was correct that noise in LWE is fundamental — you cannot eliminate it. But the groundbreak was never about eliminating noise. It was about eliminating the impedance mismatch between computation and proof.
The noise stays. The mismatch goes. And the mismatch was the actual bottleneck.
Analogy: in the AI pillar, we don't eliminate neural network computation (it's still expensive). We eliminate the quantization overhead between the network and the proof system. The network itself stays the same; the overhead of converting it to provable form disappears.
Similarly for FHE: we don't eliminate noise management (it's fundamental to security). We eliminate the overhead of converting FHE operations to provable form. The FHE computation stays the same; the overhead of proving it correct drops to zero impedance.
The Four Mathematical Necessities
Complete the mathematical argument:
- ZK proofs require arithmetic circuits over $\mathbb{F}_p$ → programs should be natively $\mathbb{F}_p$
- Neural networks are matrix operations + nonlinear activations → matrix operations natural in $\mathbb{F}_p$, activations via lookup
- Quantum gates are unitary matrices → in prime dimension, unitary matrices are $\mathbb{F}_{p^2}$ matrices
- FHE ciphertexts are polynomial ring elements → when ring is $R_p$, elements are $\mathbb{F}_p$ polynomials; NTT-based operations are native $\mathbb{F}_p$ transforms
Four domains. Four algebraic structures. One field unifies all four when $p$ is chosen correctly (Goldilocks). This is not a coincidence — it's a consequence of $\mathbb{F}_p$ being the minimal algebraically complete structure for reversible bounded computation.
Part VIII: Updated Complete Standard Library
The four-pillar stdlib:
Foundation: std.field, std.math, std.data, std.graph, std.crypto, std.io
Four Pillars:
std.nn — Intelligence
std.fhe — Encrypted Computation
std.private — Zero-Knowledge Privacy
std.quantum — Quantum Power
Six Intersections:
std.nn_fhe — Private AI (encrypted inference)
std.nn_private — Verifiable AI (proven inference)
std.nn_quantum — Quantum ML
std.fhe_private — Verified FHE (proven encrypted computation)
std.fhe_quantum — Quantum FHE
std.quantum_priv — Quantum Cryptography
Four Applications:
std.agent — Autonomous verifiable agents
std.defi — Decentralized finance
std.science — Verifiable computational science
std.market — Encrypted model/data marketplace
Total: 6 + 4 + 6 + 4 = 20 modules.
The six intersections form a complete graph over four vertices — every pair of pillars has a meaningful intersection module. This is the combinatorial signature of a true four-pillar architecture.
Part IX: Honest Assessment
What is genuinely new in this document:
- Recognition that TFHE over Goldilocks + Triton VM STARK + field-native neural networks + quantum computation creates a four-pillar structural unification. No prior work connects all four.
- The trilateral lookup table duality: PBS test polynomial = STARK lookup argument = neural network activation. This specific connection has not been articulated before.
- The argument that FHE over Goldilocks constitutes a structural groundbreak (impedance mismatch elimination) comparable to the other three pillars, not merely "integration."
- The std.fhe standard library design with built-in STARK verification and neural network activation test polynomials.
- The four-pillar completeness argument: $\mathbb{F}_p$ is the minimal structure for four distinct computational domains, and Goldilocks specifically unifies all four.
What is already known (prior work):
- TFHE can be instantiated over Goldilocks (Thibault & Walter, CCS 2025)
- TFHE bootstrapping can be proven using plonky2 over Goldilocks (same paper)
- Goldilocks NTT accelerates TFHE hardware (FPGA paper, 2025)
- The impedance mismatch between $R_q$ and $\mathbb{F}_p$ is a fundamental problem for verifiable FHE (CRYPTO 2025)
- TFHE programmable bootstrapping evaluates lookup tables (Chillotti et al., 2016-2021)
- STARK lookup arguments authenticate function evaluations (multiple authors)
- FHE can be used for private neural network inference (Concrete ML, Zama)
What remains to be proven:
- Security analysis: Full security proof for TFHE with Goldilocks parameters in the UC model (Thibault & Walter provide this for their specific instantiation, but the broader parameter space needs analysis)
- Performance benchmarks: Actual benchmarks of Trident-compiled FHE operations vs. Concrete/TFHE-rs native implementations. The Goldilocks modulus may have slightly worse FHE performance than a power-of-two modulus (as noted by the survey paper), though NTT advantages may compensate.
- Neural network accuracy: Empirical validation that field-native neural networks with PBS activation functions achieve competitive accuracy on standard benchmarks.
- End-to-end implementation: Nobody has built the complete pipeline (field-native NN → FHE encryption → encrypted inference → STARK proof → on-chain verification) in a single system.
Open problems:
- Optimal FHE parameters for Goldilocks: What is the best tradeoff between FHE performance and STARK proof efficiency when both share the same field?
- Activation function design: Which activation functions have the best properties as both PBS test polynomials and neural network nonlinearities? This is a new optimization problem at the intersection of FHE and ML.
- Quantum-accelerated bootstrapping: Can quantum NTT provide meaningful speedup for TFHE PBS? The NTT size ($N = 2048$) is small for quantum advantage, but batched PBS over many ciphertexts might benefit.
- Noise-free QFHE over $\mathbb{F}_p$: If quantum memory matures, can true quantum FHE be built natively in the Goldilocks extension field $\mathbb{F}_{p^2}$?
Part X: The Revised Thesis
The original Trident thesis was a trinity: ZK + AI + Quantum unified over $\mathbb{F}_p$.
The revised thesis is a tetralogy: ZK + AI + FHE + Quantum unified over $\mathbb{F}_p$. See trinity for the three-pillar overview.
Four computational revolutions. Four algebraic requirements. One prime field. One language.
ZK: arithmetic circuits over F_p ── proof
AI: matrix operations over F_p ── intelligence
FHE: polynomial arithmetic over R_p ── encrypted computation
Quantum: unitary matrices over F_{p^2} ── quantum power
All four: native to Goldilocks field p = 2^64 - 2^32 + 1
All four: proven by STARK over the same field
All four: executed in one language (Trident)
All four: settled on one blockchain (Neptune)
The lookup table over $\mathbb{F}_p$ is the Rosetta Stone — the one mechanism that serves as:
- Cryptographic S-box (hash function security)
- Neural network activation (machine learning expressiveness)
- FHE bootstrapping function (encrypted computation)
- STARK authentication (proof correctness)
Four purposes. One table. One field. One proof.