An example of Simplicity bias in discrete systems. See also this gingko tree
See Redundancy calculations in papers in Lempel-Ziv complexity
See Random automata and Evolving automata
Numerical experiments on the simplicity bias in finite-state transducers
On the theory/analysis side, I've been thinking about two questions:
Ideas for understanding the simplicity bias in finite state transducers
To have sufficiently high bias, we need a small non-coding loop.
To have varied output, we need loops outside the non-coding regions. This is so that the time spent in non-coding regions can vary for different outputs.
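The two requirements above can be illustrated with a toy sketch: a hypothetical two-state transducer (not one of the FSTs from the experiments) with a non-coding self-loop, driven by uniform random inputs. Inputs that linger in the non-coding loop compress to the same short outputs, so the output frequency distribution is strongly non-uniform.

```python
import random
from collections import Counter

# Hypothetical two-state transducer illustrating the structure above:
# state "A" is non-coding (a self-loop consumes input and emits nothing),
# state "B" is coding (copies input to output).
def run_fst(bits):
    state, out = "A", []
    for b in bits:
        if state == "A":
            if b == 0:
                pass            # non-coding self-loop: consume input, emit nothing
            else:
                out.append(1)   # leave the non-coding region, emit
                state = "B"
        else:
            out.append(b)       # coding: copy input symbol
            if b == 0:
                state = "A"     # loop back into the non-coding region
    return tuple(out)

random.seed(0)
n = 12
counts = Counter(run_fst([random.randint(0, 1) for _ in range(n)])
                 for _ in range(20000))
# Outputs whose preimages spend more time in the non-coding loop have many
# more input strings mapping to them, hence much higher frequency.
freqs = sorted(counts.values(), reverse=True)
```

Varying how much input time is spent in the self-loop is exactly what produces many-to-one maps onto the simple outputs, which is the bias mechanism the two bullet points describe.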
The slope of the designability/complexity plot corresponds approximately to the Topological entropy of non-coding region, computed via the Determinant of a graph. However, there is also a factor due to the conversion between {KC complexity} and {number of bits spent in non-coding region}. For the first FST below, for instance, by computing KC for strings like … and …, I found that …. Then from the topological entropy, which is …, we find …, which is consistent with what I found from the graph, approximately ….
Now, this refers to the frequency, which is between … and … (…).
If we average this quantity for …, we get …, which is close to the … found from the estimates above.
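The topological-entropy calculation mentioned above can be sketched numerically. For a subshift defined by a directed graph, the topological entropy is the log of the spectral radius of the adjacency matrix, i.e. the largest root of the characteristic determinant det(A − tI). The graph below is a hypothetical non-coding region (a self-loop plus a 2-cycle), not one of the FSTs from the experiments.

```python
import numpy as np

# Adjacency matrix of a hypothetical non-coding region:
# node 0 has a self-loop, and nodes 0 and 1 form a 2-cycle.
A = np.array([[1, 1],
              [1, 0]])

# Topological entropy = log2 of the Perron root (spectral radius),
# the largest root of det(A - t*I) = t^2 - t - 1.
spectral_radius = max(abs(np.linalg.eigvals(A)))
h = np.log2(spectral_radius)   # bits per step

# For this graph the Perron root is the golden ratio, so h = log2(phi) ≈ 0.694.
```

The determinant route (finding roots of the characteristic polynomial) and the eigenvalue route used here agree; `eigvals` is simply the more convenient way to get the Perron root numerically.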
See this paper about maximum LZ complexity, which goes like n/log n, where n is the length of the string.
See here for a Desmos graph.
On point-wise redundancy rate of Bender-Wolf's variant of SWLZ algorithm
On the pointwise redundancy of the LZ78 algorithm
They give results similar to Simplicity bias/Coding theorem method, but for Finite-state sources.
Universal redundancy rates do not exist
Redundancy of the Lempel-Ziv codes
Examples of finite-state transducers and their simplicity bias
See related stuff in Descriptional complexity
Information theory – Coding theory – Algorithmic information theory