What if the sealed page is encrypted under a key derived from itself?
key = f(sealed_page)
data = Decode(sealed_page, key)
Since the key IS the sealed page (or a function of it), there is no separate nonce, no extractable small helper. The only way to "store the key" is to store the page.
Sealing becomes: find a page that is self-consistent — it decrypts to the right data under a key derived from itself. This is a fixed-point search problem.
Every previous construction had this shape:
sealed = F(data, small_key)
The attacker stores small_key and regenerates sealed from (data, small_key).
The self-referential shape is:
sealed satisfies: sealed = G(data, h(sealed))
where h(sealed) is a digest of the sealed page. There is no small input that determines sealed independently of sealed itself. The page IS the witness to its own validity.
Work over a finite field GF(p) with p ~ 2^32 (32-bit elements). A 64 KiB page has n = 16384 elements of 4 bytes each.
Fix a public evaluation point alpha derived from page_key:
alpha = H(page_key) // a field element
Define the page polynomial hash:
poly_hash(sealed) = sum_{j=0}^{n-1} sealed[j] * alpha^j mod p
This is a single field element (32 bits) that depends on ALL elements of the page.
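A minimal sketch of this hash, evaluated in O(n) via Horner's rule (the prime `P` is an illustrative choice in the "p ~ 2^32" regime, not a normative parameter):

```python
# Illustrative polynomial hash over GF(P); P is a toy choice of 32-bit-scale
# prime, not a normative parameter of the construction.
P = 2**31 - 1  # Mersenne prime

def poly_hash(sealed, alpha, p=P):
    """sum_{j=0}^{n-1} sealed[j] * alpha^j mod p, via Horner's rule in O(n)."""
    acc = 0
    for x in reversed(sealed):
        acc = (acc * alpha + x) % p
    return acc
```

Because each term carries a distinct power of alpha, changing any single element changes the hash: the difference is delta * alpha^j, which is nonzero in a prime field.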
The encoding relation is:
For each i:
    leave_out_hash_i = poly_hash(sealed) - sealed[i] * alpha^i
    sealed[i] = data[i] + PRF(leave_out_hash_i, i) mod p
In words: each element of the sealed page equals the data element plus a pseudorandom mask derived from the polynomial hash of all OTHER elements.
The mask for element i depends on leave_out_hash_i, which is the hash of all elements EXCEPT element i. This is not a small separate value — it is a function of essentially the entire page.
An attacker might try to store poly_hash(sealed) as a 32-bit "nonce." But knowing poly_hash does not help:
sealed[i] = data[i] + PRF(poly_hash - sealed[i] * alpha^i, i)
This is an equation where sealed[i] appears on BOTH sides (inside the PRF argument and as the left-hand side). For a pseudorandom PRF, solving for sealed[i] given poly_hash requires brute force over the field: ~2^32 trials per element.
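A scaled-down sketch makes the brute force visible. The toy prime `P = 251`, evaluation point `ALPHA`, and SHA-based `prf` are my illustrative stand-ins; the sealed page is fixed by fiat and the data it seals is derived via the decode relation:

```python
import hashlib

P, ALPHA, n = 251, 7, 8          # toy field so the brute force is visible

def prf(x, i):
    """Toy PRF: truncated SHA-256 of (x, i). Illustrative stand-in only."""
    d = hashlib.sha256(f"{x}:{i}".encode()).digest()
    return int.from_bytes(d[:4], "big") % P

sealed = [17, 42, 5, 99, 200, 3, 77, 150]    # an arbitrary sealed page
full_hash = sum(s * pow(ALPHA, j, P) for j, s in enumerate(sealed)) % P
# Decode defines the data this page seals, so the encoding relation holds:
data = [(sealed[i] - prf((full_hash - sealed[i] * pow(ALPHA, i, P)) % P, i)) % P
        for i in range(n)]

# Attacker stored only full_hash and knows data[i]; sealed[i] appears on both
# sides of its own equation, so every field element must be tried.
i = 3
hits = [s for s in range(P)
        if s == (data[i] + prf((full_hash - s * pow(ALPHA, i, P)) % P, i)) % P]
assert sealed[i] in hits         # found only by exhausting ~P candidates
```

At the real field size this loop becomes ~2^32 PRF evaluations per element, which is where the cost estimate below comes from.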
The attacker who stores poly_hash (32 bits per page) and wants to regenerate the page must brute-force EACH element independently: n * 2^32 = 16384 * 4 billion ~ 7 * 10^13 PRF evaluations per page.
At 10^9 PRF evaluations per second: ~70,000 seconds ~ 19 hours per page.
For 10 challenged pages: ~190 hours. Way beyond any PoSt window.
Decode(sealed_page, page_key):
    alpha = H(page_key)
    full_hash = sum sealed[j] * alpha^j mod p              // O(n) field ops
    for each i:
        leave_out_hash_i = full_hash - sealed[i] * alpha^i // O(1)
        mask_i = PRF(leave_out_hash_i, i)                  // O(1)
        data[i] = sealed[i] - mask_i mod p                 // O(1)
    return data
Total decode cost: O(n) field operations + n PRF calls ~ microseconds for a 64 KiB page. Warm.
The key asymmetry: when you HAVE the sealed page, computing leave_out_hash_i is trivial — subtract one known term from the full hash. When you DON'T have the sealed page, computing leave_out_hash_i for element i requires knowing all OTHER elements, which you also don't have.
The polynomial hash couples all elements: knowing 99% of elements plus the full hash tells you the missing 1% (by subtraction). But knowing 0% of elements plus the full hash tells you nothing — each element's equation is independently hard.
This is the most interesting property.
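The "99% plus the hash pins down the rest" claim is a one-line linear solve. A toy check (small prime `P` and point `ALPHA` are illustrative choices):

```python
# Knowing all elements but one, plus the full polynomial hash, determines the
# missing element linearly: full_hash = partial + page[i] * ALPHA^i.
P, ALPHA = 251, 7                       # toy parameters, illustrative only
page = [17, 42, 5, 99, 200, 3, 77, 150]
full_hash = sum(x * pow(ALPHA, j, P) for j, x in enumerate(page)) % P

i = 4                                   # the element the holder is missing
partial = sum(page[j] * pow(ALPHA, j, P) for j in range(len(page)) if j != i) % P
recovered = (full_hash - partial) * pow(pow(ALPHA, i, P), -1, P) % P
assert recovered == page[i]
```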
If the attacker stores 99% of the sealed page and discards 1%:
For each missing element i:
- They know all other elements, so they can compute leave_out_hash_i by summing.
- Then sealed[i] = data[i] + PRF(leave_out_hash_i, i) — directly computable!
- Cost: O(n) per missing element (to compute the leave-out hash from stored elements).
So discarding 1% and regenerating costs O(n) per missing element — essentially free.
This is a problem. The construction has the classic partial-discard vulnerability for stored-fraction attacks.
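The regeneration attack runs end to end in a few lines. Toy parameters and the SHA-based `prf` are my illustrative stand-ins; the data is derived from an arbitrary sealed page via the decode relation, so the encoding relation holds by construction:

```python
import hashlib

P, ALPHA, n = 251, 7, 8          # toy parameters, illustrative only

def prf(x, i):
    """Toy PRF: truncated SHA-256 of (x, i). Illustrative stand-in only."""
    d = hashlib.sha256(f"{x}:{i}".encode()).digest()
    return int.from_bytes(d[:4], "big") % P

sealed = [17, 42, 5, 99, 200, 3, 77, 150]    # a self-consistent page, by fiat
full_hash = sum(s * pow(ALPHA, j, P) for j, s in enumerate(sealed)) % P
data = [(sealed[i] - prf((full_hash - sealed[i] * pow(ALPHA, i, P)) % P, i)) % P
        for i in range(n)]                   # decode defines the data it seals

# Attacker discarded element 5 but kept the rest (and knows data, e.g. CC zeros):
missing = 5
leave_out = sum(sealed[j] * pow(ALPHA, j, P) for j in range(n) if j != missing) % P
regenerated = (data[missing] + prf(leave_out, missing)) % P
assert regenerated == sealed[missing]        # O(n) work, no brute force
```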
Replace the cheap PRF with a SLOW function (Sloth-style):
sealed[i] = data[i] + Sloth_slow(leave_out_hash_i, i) mod p
Decode (honest, has all elements):
data[i] = sealed[i] - Sloth_fast(leave_out_hash_i, i) mod p
Wait — Sloth's slow direction is sqrt, fast direction is square. But here we need:
- Seal: compute sqrt (slow direction) to find the mask
- Decode: compute... the SAME mask. The decode must know the mask to subtract it.
The mask IS Sloth_slow(leave_out_hash_i, i). Computing it requires the slow direction regardless of whether you are sealing or decoding. So decode is also slow. That defeats the purpose.
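The asymmetry in question, in one step: for a prime with P % 4 == 3, a square root costs a full modular exponentiation while the check costs a single squaring. This sketch is a single step only (real Sloth chains many such steps); `P` is an illustrative prime:

```python
# One step of the square-root/square asymmetry that Sloth builds on.
# P % 4 == 3, so x^((P+1)/4) is a square root of x when x is a quadratic residue.
P = 2**127 - 1

def slow_step(x):            # "slow" direction: a full modular exponentiation
    return pow(x, (P + 1) // 4, P)

def fast_step(y):            # "fast" direction: a single modular squaring
    return (y * y) % P

x = pow(123456789, 2, P)     # start from a guaranteed quadratic residue
y = slow_step(x)
assert fast_step(y) == x     # verification is one multiplication
```

The problem stated above is visible here: whoever needs the value `y` itself (the mask) must run `slow_step`; `fast_step` only checks a mask you already have.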
Unless the mask is structured so that decoding uses a DIFFERENT cheap operation.
What if the self-referential relation uses a Feistel network where the page is split into halves, and each half keys the other?
sealed = (L, R) where:
L = data_L + PRF(H(R), "left")
R = data_R + PRF(H(L), "right")
Wait, this is circular: L depends on R and R depends on L.
But it can be solved iteratively:
guess R_0 = random
L_1 = data_L + PRF(H(R_0), "left")
R_1 = data_R + PRF(H(L_1), "right")
L_2 = data_L + PRF(H(R_1), "left")
...
If PRF is a contraction in some metric, this converges. For a random-oracle PRF, convergence is not guaranteed for large blocks but may work for small internal state.
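For a toy state (one-byte halves) the fixed point can simply be found by exhaustive search rather than iteration. The `seed` parameter below is my addition so the toy search is guaranteed to terminate (a fixed point need not exist for every parameter choice); all names are illustrative:

```python
import hashlib

M = 256                      # toy "half-page": a single byte each

def prf(x, seed, label):
    """Toy PRF: one byte of SHA-256. Illustrative stand-in only."""
    return hashlib.sha256(f"{x}:{seed}:{label}".encode()).digest()[0]

def seal(data_L, data_R):
    """Exhaustively search for a self-consistent (L, R); a fixed point need
    not exist for a given seed, so retry with fresh seeds until one does."""
    seed = 0
    while True:
        for R in range(M):
            L = (data_L + prf(R, seed, "left")) % M
            if (data_R + prf(L, seed, "right")) % M == R:
                return L, R, seed
        seed += 1

def decode(L, R, seed):
    """Cheap with both halves in hand: just strip the masks."""
    return (L - prf(R, seed, "left")) % M, (R - prf(L, seed, "right")) % M

L, R, seed = seal(100, 200)
assert decode(L, R, seed) == (100, 200)
```

Note the caveat this toy exposes: the retry seed is itself a small stored value, so a real construction would need the relation parameterized such that a fixed point exists without one.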
Decode (has both L and R):
data_L = L - PRF(H(R), "left") // O(n/2) hash + O(n/2) ops
data_R = R - PRF(H(L), "right") // O(n/2) hash + O(n/2) ops
Total: O(n). Fast.
The self-referential property: no small extractable key. Storing H(L) or H(R) (32 bytes each) helps partially, but regenerating the other half requires solving the fixed-point equation, which for a PRF amounts to brute force over the half-page space. For n/2 = 8192 elements of 32 bits each: completely infeasible.
Round 1: split page into 2 halves, each half masks the other
Round 2: split into 4 quarters, each quarter masked by hash of 3 others
...
Round k: split into 2^k pieces, each masked by hash of all others
At the final round, every small piece depends on all other pieces. The coupling is total.
Decode: reverse the rounds. Each round is O(n) because you have all pieces.
Seal: find a fixed point for the coupled system. This is the hard part.
For the multi-round construction, the sealer needs to solve:
Find (sealed[0], ..., sealed[n-1]) such that the self-referential relations hold for all rounds.
This is a system of equations. If the relations are linear (e.g., XOR with PRF output), the system might be solvable by Gaussian elimination. But PRF is nonlinear.
Possible sealing strategies:
- Iterative relaxation: Start with data, iterate the self-referential map. If the map is contractive, it converges. For random-oracle-like PRFs, contraction is not guaranteed but may hold empirically for suitable constructions.
- Backbone-derived trapdoor: The backbone provides additional structure that makes the fixed-point equation solvable in polynomial time during sealing. After sealing, the backbone (and hence the trapdoor) is discarded.
- Layered construction: Apply the self-referential relation in layers. In each layer, half the page is fixed and the other half is derived. This is essentially a Feistel network and can be made to work deterministically.
Strategy 3 (layered Feistel) is the most promising because it avoids the convergence question entirely. A k-round Feistel network on the page:
Seal:
    state = data
    for round = 0 to k-1:
        (L, R) = split(state)
        R' = R + PRF(H(L) || round || page_key)
        state = merge(R', L)        // swap halves
    return state

Decode:
    state = sealed_page
    for round = k-1 downto 0:
        (R', L) = split(state)      // reverse the swap
        R = R' - PRF(H(L) || round || page_key)
        state = merge(L, R)
    return state
This is deterministic (no search needed), O(k * n), and the sealed page depends on the data in a complex way after k rounds.
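A runnable byte-level sketch of the k-round construction (XOR stands in for field addition, and the SHA-based `mask` expansion is my illustrative stand-in for the PRF):

```python
import hashlib

def mask(key_bytes: bytes, length: int) -> bytes:
    """Expand a hash key into a pseudorandom mask of `length` bytes (counter mode)."""
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(key_bytes + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def seal(data: bytes, page_key: bytes, rounds: int = 4) -> bytes:
    state = data
    for r in range(rounds):
        half = len(state) // 2
        L, R = state[:half], state[half:]
        m = mask(hashlib.sha256(L).digest() + bytes([r]) + page_key, len(R))
        R2 = bytes(a ^ b for a, b in zip(R, m))   # XOR in place of "+ mod p"
        state = R2 + L                            # swap halves
    return state

def decode(sealed: bytes, page_key: bytes, rounds: int = 4) -> bytes:
    state = sealed
    for r in reversed(range(rounds)):
        half = len(state) // 2
        R2, L = state[:half], state[half:]        # reverse the swap
        m = mask(hashlib.sha256(L).digest() + bytes([r]) + page_key, len(R2))
        R = bytes(a ^ b for a, b in zip(R2, m))
        state = L + R
    return state

data, key = b"0123456789abcdef", b"toy-page-key"
assert decode(seal(data, key), key) == data      # deterministic round trip
```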
But this is an iterative construction! The user explicitly said iterative XOR-like constructions suffer from partial regeneration. And indeed: if you discard element j in the right half, you can reconstruct it from the Feistel round using the left half and the hash.
The self-referential idea is elegant, but it reveals a fundamental tension:
- If the self-referential relation is TRULY circular (all elements depend on all others simultaneously), finding a valid sealed page is a hard fixed-point problem — potentially too hard even for the honest sealer.
- If the relation is layered/sequential (Feistel-style), the honest sealer can compute it efficiently, but the attacker can also regenerate discarded elements proportionally.
- The only escape would be a relation that the honest sealer can solve efficiently via a trapdoor (which is discarded), but without the trapdoor requires brute force per element.
Even if the Feistel version has proportional partial regeneration, the self-referential polynomial-hash version has a unique property that the Feistel doesn't:
For CC sectors (data = 0), the attacker who stores NOTHING must solve the full fixed-point system from scratch.
In a Feistel encoding of CC data, the attacker stores nothing and recomputes by running the Feistel forward. Cost: same as sealing = O(k * n). Not expensive.
In the polynomial-hash encoding of CC data, the attacker stores nothing and must find a self-consistent page. Cost: brute force, many hours.
So the self-referential property specifically helps against the CC-sector attack, which is the hardest case.
What if most of the page uses a Feistel encoding (for efficient sealing), but a small portion (say 128 bits) must satisfy a self-referential hash condition?
sealed_page = Feistel_encode(data, page_key) XOR small_adjustment
condition: H(sealed_page) has specific structure (e.g., first 32 bits = f(sealed_page))
Finding the adjustment requires ~2^32 trials (one Feistel encode up front, then an adjustment XOR and a page hash per trial). Decode: one Feistel decode + one hash check. The "nonce" (the adjustment) is distributed across the page via the hash dependency.
But... the adjustment IS a small extractable thing again (or rather, H(sealed_page) is).
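The search is easy to sketch at reduced difficulty. Here an 8-bit condition (first byte of the hash must equal the last byte of the page) replaces the 32-bit one, so ~2^8 trials suffice instead of ~2^32; the condition, the 2-byte adjustment, and the placeholder page are all my illustrative choices:

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def find_adjustment(encoded: bytes) -> bytes:
    """Search a 2-byte adjustment XORed into the page tail until the first
    byte of H(sealed) equals the last byte of sealed (toy 8-bit self-ref
    condition; the text's 32-bit version needs ~2^32 trials)."""
    for adj in range(2**16):
        a = adj.to_bytes(2, "big")
        cand = encoded[:-2] + bytes(x ^ y for x, y in zip(encoded[-2:], a))
        if H(cand)[0] == cand[-1]:
            return cand
    raise RuntimeError("no adjustment found in 2^16 trials")

encoded = b"feistel-encoded page bytes, toy stand-in........"
sealed = find_adjustment(encoded)
assert H(sealed)[0] == sealed[-1]    # decode-side check is a single hash
```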
| Property | Polynomial-hash self-ref | Feistel self-ref | Hybrid |
|---|---|---|---|
| Decode cost | O(n), very fast | O(k*n), fast | O(k*n + hash), fast |
| Seal cost | Fixed-point search (potentially very hard) | O(k*n), same as decode | O(k*n) + ~2^32 page hashes |
| No extractable nonce? | Yes (truly self-ref) | No (Feistel is deterministic) | Partially |
| CC-sector attack | Must solve fixed-point: very expensive | Rerun Feistel: cheap | 2^32 trials: moderate |
| Partial discard | Cheap regeneration if most is stored; infeasible from nothing | Proportional regeneration | Proportional |
| Honest sealing feasible? | OPEN — possibly infeasible | Yes | Yes |
Can we find a self-referential fixed-point construction where:
- The honest sealer can find the fixed point efficiently (using backbone-derived trapdoor or iterative convergence)
- Without the trapdoor, finding the fixed point requires brute force proportional to the page size
- Decode from the stored page is O(n)
- There is no small extractable sub-witness
This is a well-defined mathematical question. I do not know the answer, but it is more specific and more promising than any static witness direction we have examined.
- Lenstra, Wesolowski, "A random zoo: sloth, unicorn, and trx" — ePrint 2015/366 (slow-hash / Sloth primitive)
- Rivest, Shamir, Wagner, "Time-lock puzzles and timed-release crypto" — MIT technical report, 1996 (sequential-squaring puzzles)
- Luby, Rackoff, "How to construct pseudorandom permutations from pseudorandom functions" — SIAM J. Computing, 1988 (Feistel networks)