
S-Entropy Connection

Hypothesis

The convergence score \(S\) is related to, but distinct from, Shannon entropy \(H\). Three specific sub-hypotheses: (1) \(S\) and \(H\) are negatively correlated (convergence reduces uncertainty). (2) \(S\) might be proportional to \(-dH/dt\) (rate of entropy decrease). (3) The entropy of the interaction matrix itself may reveal whether convergence corresponds to specialisation (low entropy) or uniformity (high entropy).

Method

  1. Run the logistic map at multiple \(r\) values from 2.5 to 4.0. At each, compute the time series of \(S\) and the Shannon entropy \(H\) of the empirical distribution of iterates (binned into 50 bins over \([0,1]\)).
  2. Compute Pearson correlation between \(S\) and \(H\) across all \(r\) values.
  3. Compute \(-dH/dt\) (finite differences) and correlate with \(S\) to test the "S = negative entropy rate" hypothesis.
  4. Compute the matrix entropy \(H_M = -\sum_{ij} p_{ij} \log p_{ij}\) of the normalised interaction matrix at each step. Correlate \(H_M\) with \(S\).

Results

S vs Shannon Entropy

| \(r\) | Regime | Mean \(S\) | Mean \(H\) |
|---|---|---|---|
| 2.5 | Fixed point | +0.98 | 0.12 |
| 3.2 | Period-2 | +0.61 | 0.69 |
| 3.5 | Period-4 | +0.33 | 1.39 |
| 3.57 | Onset of chaos | +0.04 | 2.81 |
| 3.7 | Chaotic | -0.18 | 3.42 |
| 4.0 | Full chaos | -0.52 | 3.91 |

Pearson correlation: \(r(S, H) = -0.457\). The relationship is moderately negative: higher-entropy regimes tend to have lower \(S\), but the coupling is not tight.
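The correlation in step 2 reduces to a single NumPy call. A minimal sketch using the regime means from the table above (illustrative only: the reported \(-0.457\) was computed across the full per-step series, not these six means, so only the sign is expected to agree):

```python
import numpy as np

# Regime means copied from the results table (illustrative only).
S = np.array([+0.98, +0.61, +0.33, +0.04, -0.18, -0.52])
H = np.array([0.12, 0.69, 1.39, 2.81, 3.42, 3.91])

r_SH = np.corrcoef(S, H)[0, 1]  # Pearson correlation coefficient
```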

S vs \(-dH/dt\)

| Comparison | Pearson \(r\) | Interpretation |
|---|---|---|
| \(S\) vs \(H\) | -0.457 | Moderate negative |
| \(S\) vs \(-dH/dt\) | +0.153 | Weak positive |

\(S\) is not simply \(-dH/dt\). The correlation is weak (\(r = 0.153\)), rejecting the hypothesis that \(S\) measures the rate of entropy decrease. \(S\) captures something different from entropy dynamics.
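The finite-difference estimate of \(-dH/dt\) from step 3 can be sketched as follows (the entropy trajectory here is a toy decreasing series; in the experiment it comes from the logistic-map run):

```python
import numpy as np

def entropy_rate(h_series, dt=1.0):
    """Finite-difference estimate of dH/dt along a series of entropy values."""
    return np.gradient(h_series, dt)

h = np.array([3.0, 2.8, 2.5, 2.1])  # toy entropy trajectory (decreasing)
neg_rate = -entropy_rate(h)          # the quantity correlated against S
```

For a decreasing \(H(t)\), every entry of `neg_rate` is positive, which is the sign convention the hypothesis assumes.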

Matrix Entropy vs S

| \(r\) | Mean \(S\) | Mean \(H_M\) |
|---|---|---|
| 2.5 | +0.98 | 1.38 |
| 3.2 | +0.61 | 1.21 |
| 3.5 | +0.33 | 1.05 |
| 3.57 | +0.04 | 0.82 |
| 3.7 | -0.18 | 0.64 |
| 4.0 | -0.52 | 0.41 |

Pearson correlation: \(r(S, H_M) = +0.611\). The relationship is positive: when \(S\) is high (convergence), the interaction matrix is more uniform (higher entropy); when \(S\) is low (divergence), the matrix is more concentrated (lower entropy, specialised).

Key finding. Convergence = uniformity, not specialisation. High \(S\) corresponds to energy being spread uniformly across interaction channels (\(H_M\) high). Divergence concentrates energy into a few dominant interactions (\(H_M\) low). This is the opposite of what a naive "convergence = order" intuition would predict.
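The direction of this effect is easy to verify on toy matrices: a uniform matrix maximises \(H_M\) (at \(\log n^2\) for an \(n \times n\) matrix), while concentrating weight into one dominant channel drives \(H_M\) toward zero. A minimal sketch of the step-4 estimator:

```python
import numpy as np

def matrix_entropy(M):
    """H_M = -sum_ij p_ij log p_ij of the normalised interaction matrix."""
    p = M / M.sum()
    p = p[p > 0]  # convention: 0 * log 0 = 0
    return float(-(p * np.log(p)).sum())

uniform = np.ones((4, 4))                      # energy spread over all channels
concentrated = np.diag([10.0, 0.1, 0.1, 0.1])  # one dominant interaction

h_uniform = matrix_entropy(uniform)       # equals log(16), the maximum
h_conc = matrix_entropy(concentrated)     # much smaller
```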

Analysis

  • S and H are related but distinct. The moderate negative correlation (\(-0.457\)) confirms that \(S\) and Shannon entropy share information about regime type, but they are far from redundant. \(S\) captures force-balance structure that entropy does not.
  • S is not entropy rate. The weak correlation with \(-dH/dt\) (\(r = 0.153\)) rules out the tempting identification of \(S\) with negative entropy production. \(S\) is a convergence diagnostic, not a thermodynamic quantity.
  • Convergence = uniform interactions. The positive \(S\)-\(H_M\) correlation (\(r = +0.611\)) reveals that convergent dynamics spread energy uniformly across the interaction matrix. Divergent dynamics concentrate energy into a few channels. This provides a structural explanation for why \(S > 0\) signals stability: uniform force distribution is harder to destabilise than concentrated force distribution.

Caveats. (1) Correlations are computed across the logistic map parameter sweep; other dynamical systems may show different relationships. (2) Shannon entropy depends on binning resolution (50 bins used here). (3) Matrix entropy is computed from a finite-dimensional interaction matrix; the continuum limit may differ. (4) Correlation does not imply causation.

Conclusion

The convergence score \(S\) has a moderate negative correlation with Shannon entropy (\(r = -0.457\)) but is emphatically not the negative entropy rate (\(r = 0.153\)). The most informative relationship is between \(S\) and the interaction matrix entropy (\(r = +0.611\)): convergence corresponds to uniform energy distribution across interaction channels, while divergence corresponds to concentrated, specialised interactions. This overturns the naive "convergence = order" intuition and provides a structural explanation for the diagnostic power of \(S\).

Reproducibility

../simplex/build/sxc exp_s_entropy.sx -o build/exp_s_entropy.ll

OPENSSL_PREFIX=$(brew --prefix openssl)
clang -O2 build/exp_s_entropy.ll \
  ../simplex/runtime/standalone_runtime.c \
  -o build/exp_s_entropy \
  -lm -lssl -lcrypto -L${OPENSSL_PREFIX}/lib

./build/exp_s_entropy
