Mode Scaling: A* Converges
| Model | Modes | A*(truth) | A*(P) | P/truth | Score | α |
|---|---|---|---|---|---|---|
| 6-mode | 6 | 1.136 | 1.020 | 86.1% | 17/20 | 2.0 |
| 8-mode | 8 | 0.290 | 0.277 | 95.5% | 16/16 | 2.0 |
| 10-mode | 10 | 0.302 | 0.290 | 96.1% | 13/14 | 2.0 |
| 12-mode | 12 | 0.328 | 0.307 | 93.8% | 14/14 | 2.0 |
| 16-mode | 16 | 0.347 | 0.328 | 94.6% | 14/14 | 2.0 |
| 20-mode | 20 | 0.347 | 0.328 | 94.6% | 14/14 | 2.0 |
| 24-mode | 24 | 0.347 | 0.328 | 94.6% | 14/14 | 2.0 |
A* converges to 0.347 by 16 modes. Adding modes 16→20→24 does not change the threshold. Regularity holds for all initial data with amplitude below A*. The framework is mode-count invariant.
Key Results
A Holistic Scaffold Framework for Navier-Stokes Regularity
36 theorems. 103 experiments. 7 Galerkin truncations solved. A* converges to 0.347. Four-step path to formal proof identified and verified.
Download Paper (PDF) →
A* Converges to 0.347
7 Galerkin truncation models (6-24 modes) solved. Regularity threshold positive and convergent. 14/14 perfect classification at 16+ modes.
View results →
4/4 Points Verified
Feedback loop structural (9/9 params). A* positive universally (12/12 + 3/3 ICs). Scaffold chain complete (20/20). Doubling time = BKM (6/6).
View verification →
I-Ratio Theorem
For K competing objectives, the interaction ratio
\[ I(\theta) = \frac{\displaystyle\sum_{i < j} g_i \cdot g_j}{\displaystyle\sum_i \lVert g_i \rVert^2} = -\frac{1}{2} \]
if and only if the system is at equilibrium. Holds for any \( K \geq 2 \).
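The equilibrium direction of this identity can be checked numerically (a Python sketch; the paper's own experiments are written in Simplex). It follows from \( \lVert \sum_i g_i \rVert^2 = \sum_i \lVert g_i \rVert^2 + 2\sum_{i<j} g_i \cdot g_j \): when the gradients cancel, the left side is zero, which forces \( I = -1/2 \).

```python
import numpy as np

def i_ratio(grads):
    """I = (sum_{i<j} g_i . g_j) / (sum_i ||g_i||^2)."""
    g = np.asarray(grads, dtype=float)
    total = g.sum(axis=0)
    sq = (g * g).sum()                  # sum_i ||g_i||^2
    cross = 0.5 * (total @ total - sq)  # sum_{i<j} g_i . g_j
    return cross / sq

# Equilibrium: gradients cancel (sum_i g_i = 0), so I is exactly -1/2.
g = np.random.default_rng(0).normal(size=(5, 3))
g_eq = g - g.mean(axis=0)               # project onto the sum-zero subspace
print(i_ratio(g_eq))                    # -0.5 (up to float rounding)

# Off equilibrium: two aligned gradients give I = +0.5, not -0.5.
print(i_ratio([[1.0, 0.0], [1.0, 0.0]]))  # 0.5
```

The check works for any number of objectives \( K \geq 2 \) and any gradient dimension, matching the theorem's scope.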
Full proof →
Desire as Bayesian Regulariser
A desire that partially contradicts the evidence stream acts as a regulariser, improving calibration by 31%. Misaligned desires outperform aligned desires at ALL observation horizons.
Full proof →
Convergence Score as Chaos Detector
The score \( S = 1 - \frac{\text{late drift}}{\text{early drift}} \) correctly identifies chaos boundaries in the logistic map, locating the Feigenbaum point at \( r \approx 3.57 \). Model-free.
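The score can be sketched directly on the logistic map. The sketch below assumes "drift" means the mean absolute one-step change of the orbit over an early and a late window; the paper's own drift measure may be normalised differently. Under this reading, an orbit that settles gives \( S \approx 1 \) (late drift vanishes), while a chaotic orbit gives \( S \approx 0 \) (drift never decays).

```python
import numpy as np

def convergence_score(r, x0=0.2, n=1000, w=100):
    """S = 1 - (late drift)/(early drift) for the logistic map x -> r*x*(1-x).

    Drift is taken as the mean absolute one-step change over a window of
    length w (an assumption; the paper may define it differently).
    """
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        x[t + 1] = r * x[t] * (1.0 - x[t])
    dx = np.abs(np.diff(x))
    early = dx[:w].mean()
    late = dx[-w:].mean()
    return 1.0 - late / early

print(convergence_score(2.5))   # settles to a fixed point: S near 1
print(convergence_score(3.9))   # chaotic: late drift ~ early drift, S near 0
```

Sweeping r across [2.5, 4.0] and watching where S collapses is the model-free boundary detection described above: no Jacobians or Lyapunov exponents are needed, only the orbit itself.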
Full proof →
B-Flow Convergence
Gradient descent on the balance residual \( B(\theta) = \frac{\|\sum g_i\|^2}{\sum \|g_i\|^2} \) converges to \( 375 \times 10^9 \) times higher precision than loss-flow.
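A minimal B-flow sketch on a toy problem (two quadratic objectives of my own choosing, not from the paper; gradients of \( B \) are taken by finite differences to keep it dependency-free). Descending \( B \) drives \( \sum g_i \to 0 \), i.e. the equilibrium where the objectives balance:

```python
import numpy as np

# Two competing quadratic objectives with minima at A and C
# (an illustrative toy problem, not one from the paper).
A = np.array([1.0, 0.0])
C = np.array([-1.0, 2.0])

def grads(theta):
    return [2.0 * (theta - A), 2.0 * (theta - C)]

def balance_residual(theta):
    """B(theta) = ||sum_i g_i||^2 / sum_i ||g_i||^2."""
    g = grads(theta)
    s = g[0] + g[1]
    return (s @ s) / sum(gi @ gi for gi in g)

def num_grad(f, theta, eps=1e-6):
    # central finite differences, to avoid an autodiff dependency
    out = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        out[i] = (f(theta + e) - f(theta - e)) / (2.0 * eps)
    return out

theta = np.array([3.0, -2.0])
for _ in range(200):
    theta = theta - 0.5 * num_grad(balance_residual, theta)

print(theta)                    # near the midpoint (0, 1), where the gradients cancel
print(balance_residual(theta))  # near 0
```

Because \( B \) is a normalised residual rather than a raw loss, its minimum is pinned at exactly zero, which is one plausible reason a B-flow can be driven far below the precision floor of an ordinary loss-flow.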
Full proof →
Cosine-Scaled Projection
Graduated conflict resolution: \( \text{scale} = \alpha \cdot |\cos(g_i, g_j)| \). 100% conflict resolution vs Riemannian PCGrad's 66.5%. Also provides implicit exploration.
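One way to read the rule as an update (a sketch; the paper's exact update may differ): when two gradients conflict (negative cosine), remove the component of \( g_i \) along \( g_j \), scaled by \( \alpha \cdot |\cos(g_i, g_j)| \). Here \( \alpha = 2.0 \) is taken from the mode-scaling table above.

```python
import numpy as np

def cosine_scaled_project(gi, gj, alpha=2.0):
    """Graduated conflict resolution: scale the projection of gi off gj
    by alpha * |cos(gi, gj)|. A sketch of the rule stated above; the
    paper's precise update may differ."""
    gi = np.asarray(gi, dtype=float)
    gj = np.asarray(gj, dtype=float)
    cos = (gi @ gj) / (np.linalg.norm(gi) * np.linalg.norm(gj))
    if cos >= 0.0:
        return gi                          # no conflict: leave gi unchanged
    scale = alpha * abs(cos)               # mild conflict -> mild correction
    return gi - scale * (gi @ gj) / (gj @ gj) * gj

gi = np.array([1.0, 0.0])
gj = np.array([-1.0, 1.0])                 # conflicting: gi . gj < 0
out = cosine_scaled_project(gi, gj)
print(out)          # the corrected gi
print(out @ gj)     # positive here: with alpha > 1 the update overshoots
                    # orthogonality, a plausible source of the implicit
                    # exploration mentioned above
```

Unlike plain PCGrad, which always projects fully onto the orthogonal complement, the correction strength here grows with the severity of the conflict, so nearly-orthogonal gradients are barely touched.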
Full proof →
Adversarial Regularisation
Partial opposition improves outcomes in EVERY domain: beliefs (31%), Prisoner's Dilemma (83.5% Pareto vs 33% Nash), GANs, and learning schedules.
Full analysis →
Conjectures
Reproducible Experiments
All experiments are implemented in Simplex and can be compiled and run independently. 188 compiler math tests validate correctness.
| File | Domain | Validates | Result |
|---|---|---|---|
| exp_contraction.sx | Core | Theorem 1 | 5/5 subsystems contract |
| exp_gradient_interference.sx | Core | Theorem 2 | 100% resolution |
| exp_lyapunov.sx | Core | Theorem 3 | 0% violations |
| exp_invariants.sx | Core | Prop 3.5 | 0 violations / 20K steps |
| exp_timescale.sx | Core | Theorem 1 | 100% separation |
| exp_interaction_matrix.sx | Core | Theorem 4 | Converges in 5 cycles |
| exp_convergence_order.sx | Core | Theorem 5 | S → 0.000264 |
| exp_anima_deep.sx | Cognitive | Theorems 6, 7 | 55% belief improvement |
| exp_anima_correlated.sx | Cognitive | Conjecture 7.1 | Desire regularisation |
| exp_chaos_boundary.sx | Dynamics | Theorem 12 | Feigenbaum detected |
| exp_s_vs_lyapunov.sx | Dynamics | Prop 12.1 | S-λ complementarity |
| exp_nash_equilibrium.sx | Games | Theorem 11 | 83.5% Pareto |
| exp_iratio_proof.sx | Core | Theorem 13 | 138/138 pass |
| exp_iratio_proof_statistical.sx | Core | Theorem 13 | 70/70 pass |
| exp_balance_residual.sx | Core | Theorem 14 | 375B× precision |
| exp_iratio_applications.sx | Cross-domain | Theorem 13 | 5 domains validated |
| exp_belief_cascade.sx | Cognitive | Conjecture 6.4 | Chain discovery |
| exp_skeptical_annealing.sx | Cognitive | Conjectures 6.3, 6.5 | Skeptic always wins |
| exp_memory_dynamics.sx | Cognitive | Conjectures 6.6-6.10 | Forgetting learnable |
| exp_sensitivity.sx | Robustness | Props 7.1-7.4 | Stable over 3 orders of magnitude |
| exp_code_gates.sx | Code | Theorem 8 | S → 0 at step 50 |
| exp_compiler_passes.sx | Compiler | Theorems 9, 10 | Per-program adaptation |
| exp_structure_discovery.sx | Topology | Gradient topology | Constraint graph found |
| exp_equilibrium_mapping.sx | Optimization | Theorem 14 | B-flow validated |
Run Any Experiment
Every experiment is a standalone .sx file — download it from the experiment page, then:
```sh
# 1. Compile the Simplex source to LLVM IR
./sxc experiment.sx -o experiment.ll

# 2. Link with the runtime into a native binary
clang -O2 experiment.ll standalone_runtime.c \
    -o experiment -lm -lssl -lcrypto

# 3. Run
./experiment
```
Need the compiler? Download pre-built binaries or build from source.
Full Setup Guide (first-time users) →
Citation
```bibtex
@article{higgins2026unified,
  title={Unified Adaptation Theorem: Convergence of Composed
         Adaptive Systems via Interaction Matrices and
         Higher-Order Convergence Diagnostics},
  author={Higgins, Rod},
  year={2026},
  url={https://lab.senuamedia.com/papers/unified-adaptation-theorem.html}
}
```