Block decomposition method

guillefix 4th November 2016 at 2:43pm

The block decomposition method (BDM) is an extension of the Coding theorem method used to measure the complexity of N-dimensional arrays. Since a Network can be expressed via its Adjacency matrix, which is a 2D array, BDM can also be used to measure Network complexity.

Original paper

The measure (which we also call BDM) of the complexity of an array A is defined as:

K(A) = \sum\limits_{(r,n) \in \mathcal{A}_{d}} \log_2(n) + K_m(r)

where \mathcal{A}_d is the set of pairs (r,n) obtained when decomposing the array into non-overlapping subarrays of side length d: r is a unique subarray and n is its multiplicity (the number of times it appears). K_m is the estimate of Kolmogorov complexity produced by the Coding theorem method; for N-dimensional arrays, one uses N-dimensional Turing machines, or Turmites. Note that \log_2(n) is the number of bits needed to specify the number n.
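The decomposition and sum above can be sketched in a few lines. The CTM lookup is the part that requires the precomputed Turing-machine frequency data, so the `toy_ctm` function below is a stand-in assumption, not a real K_m estimate:

```python
import numpy as np
from collections import Counter

def bdm2d(array, ctm, d=4):
    """Block decomposition method for a 2D array.

    `ctm` is a stand-in for the CTM lookup mapping each d x d block
    (as a tuple of tuples) to its complexity estimate K_m(r).
    """
    rows, cols = array.shape
    blocks = Counter()
    # Partition into non-overlapping d x d subarrays (ignoring any
    # ragged border, one common convention).
    for i in range(0, rows - d + 1, d):
        for j in range(0, cols - d + 1, d):
            block = tuple(map(tuple, array[i:i + d, j:j + d]))
            blocks[block] += 1
    # K(A) = sum over unique blocks r of log_2(n) + K_m(r)
    return sum(np.log2(n) + ctm(r) for r, n in blocks.items())

# Toy stand-in for the CTM table: score grows with the number of 1s
# in the block (NOT a real K_m estimate).
toy_ctm = lambda r: 1.0 + sum(map(sum, r))

A = np.zeros((8, 8), dtype=int)
A[0, 0] = 1
print(bdm2d(A, toy_ctm, d=4))
```

Here the 8x8 array splits into four 4x4 blocks: three identical all-zero blocks (multiplicity 3) and one block containing a single 1, so the sum has only two terms.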

In the original paper, a set of 2-dimensional Turing machines was executed to produce all square arrays of side length d = 4. This is why the BDM is needed: it decomposes larger objects into subarrays whose Kolmogorov complexity has already been estimated.

The order of the graph nodes in the adjacency matrix affects the complexity value retrieved by the BDM. This is especially important for highly symmetrical graphs.

In estimating complexity, it is reasonable to take the complexity of a graph to be the lowest K_m value over all permutations of its adjacency matrix, since the shortest program generating the simplest adjacency matrix is also the shortest program generating the graph.

Normalized BDM

The chief advantage of a normalized measure is that it enables comparison among objects of different sizes without allowing size to dominate the measure.

MaxBDM is calculated approximately, as described in the paper.
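One plausible normalization, sketched below under the assumption (not confirmed by this note) that scores are simply rescaled by MaxBDM, the approximate BDM of a maximally complex array of the same dimensions:

```python
def normalized_bdm(bdm_value, max_bdm):
    # Assumed normalization, not necessarily the paper's exact
    # definition: dividing by MaxBDM maps scores of differently
    # sized objects onto a comparable [0, 1] scale.
    return bdm_value / max_bdm

print(normalized_bdm(7.5, 30.0))  # -> 0.25
```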

Implementation

An online implementation and code can be found here