Generalization in evolution

cosmos 5th July 2017 at 1:39am
Evolution Generalization

See 1st DTC short project report on overleaf

Phenotypic variation is heavily determined by intrinsic tendencies imposed by the genetic and developmental architecture [18–21]. For instance, developmental biases may permit high variability for one phenotypic trait and limited variability for another, or cause certain phenotypic traits to co-vary [6, 15, 22–26]. Developmental processes are themselves shaped by previous selection. As a result, we may expect that past evolution could adapt the distribution of phenotypes explored by future natural selection, amplifying promising variations and avoiding less useful ones, by evolving developmental architectures that are predisposed to exhibit effective adaptation [10, 13]. Selection, however, cannot favour traits for benefits that have not yet been realised. Moreover, in situations where selection can control phenotypic variation, it nearly always reduces such variation because it favours canalisation over flexibility [23, 27–29].

Developmental canalisation may seem to be intrinsically opposed to an increase in phenotypic variability. Some, however, view these notions as two sides of the same coin, i.e., a predisposition to evolve some phenotypes more readily goes hand in hand with a decreased propensity to produce others [8, 30, 31]. Kirschner and Gerhart integrated findings that support these ideas under the unified framework of facilitated variation [8, 32]. Similar ideas and concepts include the variational properties of organisms [13], the self-facilitation of evolution [20], evolution as tinkering [33] and related notions [6, 7, 10, 12]. In facilitated variation, the key observation is that the intrinsic developmental structure of organisms biases both the amount and the direction of phenotypic variation. Recent work in this area has shown that multiple selective environments are necessary to evolve evolvable structures [25, 27, 34–36]. When selective environments contain underlying structural regularities, it is possible for evolution to learn to limit the phenotypic space to regions that are evolutionarily more advantageous, promoting the discovery of useful phenotypes in a single or a few mutations [35, 36]. But, as we will show, these conditions do not necessarily enhance evolvability in novel environments. Thus the general conditions that favour the emergence of adaptive developmental constraints that enhance evolvability are not well understood.

To address this, we study the conditions under which evolution by natural selection can find developmental organisations that produce what we refer to here as generalised phenotypic distributions—i.e., distributions that are not only capable of producing multiple distinct phenotypes that have been selected in the past, but can also produce novel phenotypes from the same family. Parter et al. have already shown that this is possible in specific cases, studying models of RNA structures and logic gates [34].

Watson et al. demonstrated a further result, more important for our purposes here: that the way regulatory interactions evolve under natural selection is mathematically equivalent to the way neural networks learn [25].
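To convey the flavour of this correspondence, here is a minimal sketch. It assumes a linear developmental map and a quadratic fitness function, so it is only an illustrative toy and not a reproduction of the model analysed in [25]: with a phenotype p = Bg produced from a genotypic input g by regulatory weights B, and fitness −||t − p||² for the current target phenotype t, the selection gradient on B has exactly the form of the delta rule used to train a one-layer linear network.

```python
# Minimal sketch (assumptions: linear development, quadratic fitness; not the
# model of ref. [25]).  Development maps a genotypic input g to a phenotype
# p = B @ g through regulatory weights B, and fitness is F = -||t - p||^2 for
# the current target phenotype t.  The selection gradient dF/dB = 2 (t - p) g^T
# has the same form as the delta rule for a one-layer linear network, which is
# the sense in which the evolution of the weights mimics learning.
import numpy as np

rng = np.random.default_rng(0)
n = 4                                 # toy number of genes / traits
g = rng.normal(size=n)                # fixed genotypic/embryonic input
t = rng.normal(size=n)                # target phenotype in the current environment
B = np.zeros((n, n))                  # regulatory interaction weights

eta = 0.05                            # small step: mutation size x selection strength
for _ in range(500):
    p = B @ g                         # developed (adult) phenotype
    B += eta * np.outer(t - p, g)     # follow the fitness gradient = delta rule

print(np.allclose(B @ g, t, atol=1e-3))   # True: B has 'memorised' the selected phenotype
```

Under this reading, repeated selection in past environments plays the role of training data and the evolved regulatory weights play the role of the learned model.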

Here we show that this functional equivalence between learning and evolution predicts the evolutionary conditions that enable the evolution of generalised phenotypic distributions. We test the analogy between learning and evolution by examining its predictions. Specifically, we resolve the tension between canalisation of phenotypes that have been successful in past environments and anticipation of phenotypes that are fit in future environments by recognising that this is equivalent to prediction in learning systems. Such predictive ability follows simply from the ability to represent structural regularities in previously seen observations (i.e., the training set) that also hold in the yet-unseen ones (i.e., the test set). In learning systems, such generalisation is commonplace and not considered mysterious. But it is also understood that successful generalisation in learning systems cannot be taken for granted and requires certain well-understood conditions. We argue here that understanding the evolution of development is formally analogous to model learning and can provide useful insights and testable hypotheses about the conditions that enhance the evolution of evolvability under natural selection [42, 43]. Thus, in recognising that learning systems do not really ‘see into the future’ but can nonetheless make useful predictions by generalising past experience, we demystify the notion that short-sighted natural selection can produce novel phenotypes that are fit for previously unseen selective environments and, more importantly, we can predict the general conditions under which this is possible. This functional equivalence between learning and evolution produces many interesting, testable predictions (Table 1).
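The following toy sketch shows what a generalised phenotypic distribution looks like in these terms (an assumed setup for illustration, not the simulations reported in the cited work): target phenotypes are combinations of two modules, a Hopfield-style developmental map is 'trained' by a simple Hebbian proxy for selection on three of the four combinations, and a modular (block-diagonal) regulatory architecture, standing in for an evolved developmental constraint, also produces the fourth, never-selected combination, whereas an unconstrained architecture merely recalls the past targets.

```python
# Toy sketch (assumed setup, not the simulations of refs [25, 34, 35]): target
# phenotypes are combinations of two modules, each with two variants.  A
# Hopfield-style developmental map p <- sign(W @ p) is 'trained' by a simple
# Hebbian proxy for selection on three of the four combinations.  With a modular
# (block-diagonal) regulatory architecture the never-selected fourth combination
# is also a stable phenotype; with unconstrained weights it is not.
import numpy as np

a1, a2 = np.array([1, 1, 1, 1]), np.array([1, 1, -1, -1])      # module-1 variants
b1, b2 = np.array([1, -1, 1, -1]), np.array([1, -1, -1, 1])    # module-2 variants

seen = [np.concatenate(p) for p in [(a1, b1), (a2, b2), (a1, b2)]]  # past targets
novel = np.concatenate((a2, b1))                                    # never selected

W = sum(np.outer(p, p) for p in seen).astype(float)    # Hebbian 'selection' on weights
np.fill_diagonal(W, 0)

mask = np.zeros((8, 8))                                 # block-diagonal connectivity
mask[:4, :4] = mask[4:, 4:] = 1
W_modular = W * mask                                    # evolved developmental constraint

def develop(weights, p, steps=10):
    """Iterate the recurrent developmental dynamics to an adult phenotype."""
    for _ in range(steps):
        p = np.sign(weights @ p)
    return p

print(np.array_equal(develop(W_modular, novel), novel))  # True: novel combination is stable
print(np.array_equal(develop(W, novel), novel))          # False: full net recalls a past target
```

Whether and when natural selection actually discovers such a constrained architecture is precisely the 'conditions for generalisation' question at issue here.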


The quality of generalisation can be improved by representing the class in a parameter space or model space (genotype space) that is different from the feature space (phenotype space).
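As a toy illustration of this point (an assumed linear family of phenotypes, not an example from the text): a compact model-space representation, here the two subspace coordinates recovered from past phenotypes and playing the role of genotype-space coordinates, covers unseen members of the family, whereas memorising the past phenotypes in feature space does not.

```python
# Toy sketch of the abstract point (an assumed linear family, not an example from
# the text).  A family of phenotypes is generated from two hidden factors,
# p = A @ z.  A compact model-space representation (the 2-D subspace recovered
# from past phenotypes, playing the role of genotype-space coordinates) covers an
# unseen member of the family almost exactly, whereas a feature-space
# representation that memorises the past phenotypes themselves does not.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 2))                      # hidden structure shared by the family
train = (A @ rng.normal(size=(2, 4))).T          # four previously selected phenotypes
novel = A @ rng.normal(size=2)                   # unseen member of the same family

# Model-space representation: the low-dimensional subspace spanned by past phenotypes.
_, _, Vt = np.linalg.svd(train, full_matrices=False)
basis = Vt[:2]                                   # 2-D 'genotype' coordinates
model_error = np.linalg.norm(novel - basis.T @ (basis @ novel))

# Feature-space representation: recall the closest memorised phenotype.
memory_error = min(np.linalg.norm(novel - p) for p in train)

print(model_error, memory_error)                 # ~1e-15 versus a clearly larger residual
```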

Can Evolution Learn Like Neural Networks Learn?

Learning and evolution share common underlying principles, both conceptually and formally [16, 18–21, 26, 32, 34, 37, 69]. This provides access to well-developed theoretical tools that have not been fully exploited in evolutionary theory (and conversely suggests opportunities for evolutionary theory to expand cognitive science [80, 81]). Learning theory is not just a different way of describing what we already knew about evolution. It expands what we think evolution is capable of. In particular, it shows that, via the incremental evolution of developmental, ecological, or reproductive organisations, natural selection is sufficient to produce significant features of intelligent problem solving.