Wow, that starts to explain the amazing creativity of rapid evolution (like whale evolution in 10 million years). So how to turn this into software? This seems like it could be a new class of evolutionary algorithm.
I have long had a rather uncomfortable feeling about this idea that many chance mutations eventually produce some that are close to some objective function -- for essentially the same reason given in the article. There are just too many combinatorial possibilities to explore in the rather short time the universe has been around (I mean, we're comparing something like 10^50 to something like 10^(10^(...)) here).
But this makes a lot of sense to me. Essentially what he's saying is that input sequences (vectors) pass through some kind of surjective mapping into a lower dimensional space of "networks" (I'll just call them a different space of vectors).
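A toy sketch of that many-to-one collapse (the phenotype function here is my own invented stand-in, not the article's sequence-to-structure map):

```python
from itertools import product

# Toy genotype space: all 4^6 = 4096 RNA-like sequences of length 6.
# Hypothetical many-to-one "phenotype": the multiset of bases, ignoring
# order -- a crude stand-in for the far richer sequence-to-structure map.
def phenotype(seq):
    return tuple(sorted(seq))

genotypes = list(product("ACGU", repeat=6))
networks = {}
for g in genotypes:
    networks.setdefault(phenotype(g), []).append(g)

# 4096 genotypes collapse onto just 84 phenotypes, so each phenotype
# is backed by ~49 interchangeable genotypes on average.
print(len(genotypes), len(networks))  # → 4096 84
```

The point is only the shape of the map: a huge input space is partitioned into a far smaller space of equivalence classes, and that's where the redundancy lives.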
If I had to guess (and this is purely speculation here), this is probably due to some symmetry property.
This article offers a much better explanation.
DNA can spontaneously "refactor" itself, i.e. use vastly different internals without changing behaviour. For some "implementations", an appropriate change in behaviour can be much easier than in others.
It is also local, in that equivalent genotypes are just one step away - add or remove a space - and you can continue with further steps. So a set of equivalent genotypes/programs is connected by adjacency, each one "step" from the next. [Tangent: maybe not all the equivalent genotypes are connected.]
But this is interesting redundancy, because some of those equivalent genotypes are just one step away from dramatically different phenotypes. This doesn't happen with C whitespace (though maybe it happens with other equivalent implementations: different names, ways of looping; but I can't think of an example).
Using the metaphor in the article, one set of equivalent connected genotypes is like a network of roads, on which you can take steps to move around the system without penalty, because they are all equivalent and each step is neutral. Extending the metaphor for the "interesting" aspect, another set of equivalent connected genotypes with a dramatically different phenotype is like a railway network. Mostly, the two networks are separate, but sometimes, they are very close, so that in one step, you switch to another network, like a railway station. [For correctness, we disallow level crossings, because there both road and rail would have the same phenotype. We could disallow any crossings, making it planar, or introduce the third dimension and have bridge crossings, where the position in 3D is the genotype.]
There would be a great many such networks, with distinct phenotypes.
There would be networks that have no adjacency; but it might still be possible to reach them by moving to intermediate networks (e.g. travel by car then rail then bicycle path then footpath etc), provided the phenotypes of those intermediate networks were neutral or advantageous.
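Sticking with the metaphor, here's a minimal sketch (the phenotype function and all parameters are invented for illustration): a breadth-first walk over one "network" of equivalent genotypes, recording which other phenotypes sit one step off it - the "stations".

```python
from collections import deque

L = 12  # genotype length (arbitrary)

# Hypothetical phenotype: a coarse "band" -- the genotype's bit-count
# divided by 4. Many genotypes share each band.
def phenotype(g):
    return sum(g) // 4

def neighbours(g):
    # Every genotype one point mutation away.
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(L)]

# BFS over the neutral network: all genotypes reachable from `start`
# via single mutations that never change the phenotype.
start = (1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0)  # bit-count 5, phenotype 1
target = phenotype(start)
network, stations = {start}, set()
queue = deque([start])
while queue:
    g = queue.popleft()
    for n in neighbours(g):
        if phenotype(n) == target:
            if n not in network:
                network.add(n)
                queue.append(n)
        else:
            stations.add(phenotype(n))  # a "station": adjacent other network

print(len(network), sorted(stations))  # → 3003 [0, 2]
```

Out of 4096 genotypes, 3003 form one connected neutral network, and while drifting along it you are never more than one mutation from two entirely different phenotypes.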
I like both of the article's hypotheses: that all complex systems have this property; and that evolved biological systems have it only because evolution is faster with it.
2. The second appeals to me because it helps explain accelerating evolution by the establishment of platforms: e.g. the body-plan collection of genes may have taken a long time to come up with, but once it did, body-plan diversity exploded. Though the article complains about the number of body plans possible, it's dramatically fewer than the number of possible raw sequences. It's configuring a body plan instead of coding it from scratch. Having many different possibilities is good, as it makes for a powerful, expressive platform - perhaps like an algebra or a programming language, once it gets complex enough, it is very powerful. The key quality is that within this configuration language, the density of "useful" results is higher than without it [e.g. a random configuration is more likely to be useful than random raw code - the platform is somehow specialized to its purpose]
Similarly, perhaps this system of RNA with this quality was not the first to evolve, but several arose... and this is the one that took off.
1. But maybe all complex systems have it too, provided they have redundancy. Perhaps, if there are many sets of connected equivalent genotypes, and each set is very large, there are likely to be many adjacencies between networks? Note: It's not necessary for all networks to have adjacencies, just enough of them. You could imagine varying these properties of the system (number and size of networks, relative to the total space) and come up with parameters that give "enough" adjacent networks [though I'm not quite sure how to define "enough".]
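One way to play with those parameters (every number here is an arbitrary choice of mine): assign 2^10 genotypes to K phenotypes uniformly at random and count, for each phenotype, how many other phenotypes are reachable in a single bit-flip.

```python
import random
from collections import defaultdict

random.seed(0)
L, K = 10, 20  # 2^10 genotypes, K phenotype "networks" (arbitrary)

# Crude stand-in for varying the number and size of networks: give
# each genotype a phenotype uniformly at random.
pheno = [random.randrange(K) for _ in range(2 ** L)]

# For each phenotype, collect the other phenotypes one bit-flip away.
adjacent = defaultdict(set)
for g in range(2 ** L):
    for i in range(L):
        n = g ^ (1 << i)  # flip bit i
        if pheno[n] != pheno[g]:
            adjacent[pheno[g]].add(pheno[n])

print(min(len(s) for s in adjacent.values()))
```

In this toy run every phenotype comes out adjacent to essentially all the others - which suggests adjacency alone is cheap. What a purely random map lacks is large *connected* neutral networks (a genotype's neighbours rarely share its phenotype), so connectivity looks like the genuinely rare quality.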
My feeling is that getting those parameters good enough by chance might be pretty rare - something that could take a few billion years over the surface area of a planet to have reasonable chance at...
Making this local, where adjacent points map to the same phenotype, can be trivial, e.g. ignoring one dimension of a 10-column row in a database. But an interesting local mapping seems much rarer: one where you can change one dimension in one step (and still get the same phenotype), then change a different dimension in the next step, and so on, so that the path through the space is not just one dimension varying, but a jagged, winding path.
This is important for the "path" (or connected genotypes with the same phenotype) to have a large surface area, which increases the chance of contact with the path of another phenotype (and that phenotype has some usefulness).
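The trivial-vs-interesting distinction can be made concrete (both phenotype maps below are invented for illustration): explore the connected set of equivalent genotypes and record which dimensions ever vary along neutral paths.

```python
from collections import deque

L = 8  # genotype length (arbitrary)

def neighbours(g):
    # Each neighbour paired with the dimension (bit index) that changed.
    return [(g[:i] + (1 - g[i],) + g[i + 1:], i) for i in range(L)]

def neutral_component(start, phenotype):
    # Genotypes reachable from `start` via phenotype-preserving single
    # mutations, plus the set of dimensions that change along the way.
    target = phenotype(start)
    seen, dims = {start}, set()
    queue = deque([start])
    while queue:
        g = queue.popleft()
        for n, i in neighbours(g):
            if phenotype(n) == target and n not in seen:
                seen.add(n)
                dims.add(i)
                queue.append(n)
    return seen, dims

start = (1, 0, 1, 0, 1, 0, 1, 0)

# (a) Trivial redundancy: the map just ignores dimension 0.
comp_a, dims_a = neutral_component(start, lambda g: g[1:])
# (b) "Interesting" redundancy: a threshold map whose neutral paths can
# wind through every dimension without changing the phenotype.
comp_b, dims_b = neutral_component(start, lambda g: sum(g) >= 4)

print(len(comp_a), sorted(dims_a))  # → 2 [0]
print(len(comp_b), sorted(dims_b))  # → 163 [0, 1, 2, 3, 4, 5, 6, 7]
```

The trivial map gives a tiny neutral set varying a single dimension; the threshold map gives a large one whose winding paths touch all eight dimensions - the "large surface area" that raises the odds of contact with another phenotype's network.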
That's the main problem: that there be adjacency "transfer points" between useful phenotypes. It seems to me that in the exponentially large spaces we're talking about, they would be very rare, unless the mapping had some special qualities. (That would make sense if the mapping itself had been selected by evolution - so the "selfish gene" is only secondarily in charge; the mapping is the primary one. The same may go for mutation strategies, like sex, and other as-yet undiscovered ones that might be analogous to factoring or expanding an expression, so the genotype can change dramatically in one step but remain equivalent.)
Sir Isaac Newton was not a simple nor stupid man and he gave honor to the Creator. Just because science has not figured God out, does not deny His existence.
A whole lot of what allowed the Internet, HN, science, and technology to be where they are today came out of a nation (apparently silly to the writer) which put "In God We Trust" on everything.
The first person to derive the Hubble constant, and one of the main developers of big-bang theory, was Georges Lemaître, a devout Catholic who later became president of the Pontifical Academy of Sciences. He described the idea as "the Cosmic Egg, exploding at the moment of the creation", though he didn't like the Pope relying on it in proclamations. I think it would be fair to call him a creationist.
Also, Hoyle, who was not a creationist, coined the term 'Big-Bang' to take the piss out of Lemaître's idea and promote the steady-state theory, which he preferred. Hoyle however believed that a god was guiding the fine tuning of the constants to help the evolution of life, so was effectively a proponent of intelligent design, while not being a creationist. Weirdly, the two ideas do not have to go together.
I am not saying this because the article offended me in any way; I am what most people would think of as strongly atheist, though personally I have come to dislike the label. I just think it is a lazy form of signaling that detracts from the argument and also demonstrates an ignorance of the wide variety of religious and philosophical viewpoints held throughout the scientific community.
Science isn't trying to figure god out any more than it's trying to figure fairies out. Science focuses its efforts on understanding reality.
Religion wonders about the ultimate meaning of such models. Its language is philosophy.
The first religions bundled model and meaning out of necessity. However, post-Roman Christians were keen logicians and experimenters who laid the foundation of the modern sciences. They would be ashamed of creationists' shallowness.
The big question of science is "How ... ?"
The big question of religion is "Why ... ?"
The two solve different problems and could coexist without so much animosity on each side.
The redemption of capitalism by Christianity demanded a heavily modified version of the religion that basically abandoned all the parts about people looking after each other and created a new ethos in which getting rich was both a duty and a reward.
Given a bunch of plutocrats who would pay well for effective propaganda and some creative but unscrupulous preachers willing to do their dirty work, a distinctly American (and conspicuously un-Christian) version of Christianity was born. It's been polarizing our politics and supporting the concentration of wealth in fewer and fewer hands ever since.
A couple of recent books by Nicole Aschoff and Kevin Kruse have done a good job unpacking this reprehensible hustle. If you have any interest in finding out just how much bullshit has gone into shaping your world view, Elizabeth Bruenig has reviewed both of them here: http://www.newrepublic.com/article/121564/gods-and-profits-h...
You can also read Kruse himself, who has summarized his effort (How Corporate America Invented Christian America) here: http://www.politico.com/magazine/story/2015/04/corporate-ame...
On a separate note, Newton's total disregard of evolutionary theory is not a product of any disagreement he had with the theory itself. Rather, it's due to the fact that he died more than 130 years before "On the Origin of Species" was first published. Class dismissed.
Personal attacks and acerbic swipes are not welcome on Hacker News. Please don't do this.
Even a broken clock is right twice a day - imagine then how often a collective of 300 million of those are correct! :)
(Just kidding - Americans aren't more broken than any other group. Of course)