This is false or at least highly misleading. A reader might imagine that the player's eyes track the ball in flight. This is not what happens. A pro player reads the position of the serve and begins moving before the ball is hit. The predictive power of the brain is much more important than the speed and precision the author was trying to highlight here.
Conscious processing of these things can often get in the way, but in the case of sports it can also shape instinctive behaviour into something advantageous: an augmentation of the conscious and subconscious into better performance.
A converse example, where the conscious and subconscious conflict, is dropping a mug of boiling water: the instinctive reaction is to catch the thing you dropped, but the conscious reaction is "don't burn yourself", so you let it fall and jump backwards to avoid getting splashed.
Very interesting stuff.
https://www.goodreads.com/book/show/48484.Blindsight
https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)#Consc...
Beware (or since it's the HN crowd, Recommended!): hard sci-fi.
The parallel is this obsession with rationality over intuition. Though it's somewhat ironic, given that black-box AI is simply accepted.
http://www.eugenewei.com/blog/2013/8/23/the-pitchers-who-con...
Given the recent discoveries about neurons using mRNA capsids to communicate [0], it's not that far-fetched to posit that we are really DNA computers [1]. The processing time (for new problems) seems human-like: "The slow processing speed of a DNA-computer (the response time is measured in minutes, hours or days, rather than milliseconds)".
The evolutionary argument: since DNA computing is already used by microbes [2], how could a nervous system made of (relatively) dumb neurons compete with that? Synapses still make sense, as a way to request an RNA packet and/or signal that one is coming and from where.
One neuron with a capability of ~10M pattern matches per second (encoded in DNA/RNA) would mean that the human brain executes ~2^60 pattern-matching operations per second, utilizing zettabytes of imperfectly copied data. Enough to brute-force its way through lots of problems.
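As a rough sanity check on that arithmetic (assuming the ~10M matches/second figure above and the commonly cited estimate of ~86 billion neurons in a human brain):

```python
# Back-of-the-envelope check of the ~2^60 claim.
# Assumptions: ~1e7 pattern matches per neuron per second (from the
# comment above), ~8.6e10 neurons (commonly cited estimate).
import math

matches_per_neuron = 1e7
neurons = 8.6e10

total = matches_per_neuron * neurons
print(f"total ~= 2^{math.log2(total):.1f} matches/second")
```

This lands at about 2^59.6 per second, i.e. on the order of 2^60 as stated.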
Memory as DNA would explain high-level memory quirks: each read would be destructive, transcribing DNA into RNA, interacting with other RNA in the presence of appropriate enzymes, then copying and disseminating the resulting RNA, transforming the memory each time it's retrieved.
It would also explain urban legends about people's personalities changing to resemble their organ donors': a donor's memory packets that somehow ended up in the donated organ and, with the help of immunosuppressants, managed to infect the recipient's brain.
[0] https://www.nature.com/articles/d41586-018-00492-w
What do you mean by this?
It sounds like you are assuming nature requires exactness and are really making a philosophical argument about the nature of computation, namely that an NP solution in P "doesn't exist".
> Memory as dna would explain high-level memory quirks
If we are conjecturing, then so too could approximate results "explain high-level memory quirks".
We already have many approximate algorithms that are significantly better than brute force, and I would argue that any read procedure would necessarily be algorithmic, which would then require an explanation of why this natural process failed to evolve over time.
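To make the approximate-vs-brute-force contrast concrete, here is a toy travelling-salesman instance (points invented for illustration): exhaustive search tries every tour, while a greedy nearest-neighbour pass does far less work and often lands close to the optimum.

```python
# Brute force vs a cheap approximation on a tiny TSP instance.
# The points are made up purely for illustration.
from itertools import permutations
import math

pts = [(0, 0), (1, 5), (5, 1), (6, 6), (2, 2)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_len(order):
    # total length of the closed tour visiting pts in this order
    return sum(dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Brute force: check all n! orderings (explodes quickly as n grows).
best = min(permutations(range(len(pts))), key=tour_len)

# Greedy approximation: repeatedly visit the nearest unvisited point.
order, left = [0], set(range(1, len(pts)))
while left:
    nxt = min(left, key=lambda j: dist(pts[order[-1]], pts[j]))
    order.append(nxt)
    left.remove(nxt)

print(tour_len(best), tour_len(order))
```

By construction the greedy tour can never beat the exhaustive optimum, but it touches each point once instead of enumerating factorially many tours.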
If such an explanation is simply, though arguably counterfactually, "nature requires exactness", and nature is thus unable to exploit the incremental improvement of evolving algorithms for approximate results, I would argue this implies P != NP, because otherwise I think nature, if it were able, would tend toward improving on exactness over the 13B+ years it's been expressing mathematical truths.
Since my intended inference regarding this specific unsolved problem is to develop an algorithm showing P = NP, I wonder whether the process we refer to as consciousness may be such an algorithm.
> neurons using mRNA capsids to communicate
I wonder if the RNA is raw memory data, or architectural plans for neurons which, when constructed, express memory.
The goal is to reduce n bits of data to x < n bits. Because there are fewer variables, you gain predictive capability of n-x bits. Stated differently, the goal is to get closer to the Kolmogorov complexity of whatever you're trying to model.
Yet it's not possible to compress n bits in the general case. That's because the Kolmogorov complexity is a function of your assumed knowledge (assumptions). All you can do is check every possible transformation from your assumptions, starting with the most probable one; the probabilities themselves are based on your knowledge.
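To illustrate the point about assumptions: a compressor only shrinks data whose structure matches its built-in model. Using zlib as a crude stand-in for a Kolmogorov-complexity upper bound:

```python
# Compressed size depends on the structure the compressor "assumes",
# not on length alone. Both inputs are 1000 bytes.
import os
import zlib

structured = b"ab" * 500       # highly regular
random_ish = os.urandom(1000)  # incompressible in the general case

print(len(zlib.compress(structured)))  # tiny: pattern fits zlib's model
print(len(zlib.compress(random_ish)))  # roughly 1000 or more: no exploitable structure
```

The "general case" claim above is just the pigeonhole principle: no lossless scheme can shrink all 2^n inputs, so any compressor wins only on the subset its assumptions describe.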
>we already have many approximate algorithms that are significantly better than brute force
Yes - but that means the algorithm itself, along with its execution, is the shortest (in the used metric, which can include execution time) answer for a particular problem. How do you generate the algorithm in the first place?
Abstract bits, sure: 2^4 objects are unable to represent 2^5 objects, simply because 16 != 32.
But what does it have to do with 'general intelligence' and brute force?
We were originally talking about RNA communication... where does Kolmogorov come in? In the data representation in the RNA? But what of it, when the mechanism that encodes and decodes is unrestricted in its upper-bound complexity? If the disparity between the upper bound of the memory being encoded and that of the mechanism encoding it is great enough, then that system could 'compress n bits in the general case'.
> All you can do is start checking every possible transformation from your assumption starting with the most probable one - the probabilities are based on your knowledge itself.
This just sounds like you're saying every algorithm is brute force, but with different possible states due to assumptions.
Would you call Euclid's gcd 'brute force with assumptions'? I would argue that 'algorithm' is the antonym of 'brute force'.
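For the sake of the contrast: Euclid's algorithm exploits the identity gcd(a, b) = gcd(b, a mod b) and terminates in logarithmically many steps, while brute force tries every candidate divisor. A minimal sketch:

```python
# Euclid's algorithm: structure-exploiting, O(log min(a, b)) steps.
def gcd_euclid(a, b):
    while b:
        a, b = b, a % b
    return a

# Brute force: test every candidate divisor from min(a, b) down.
def gcd_brute(a, b):
    for d in range(min(a, b), 0, -1):
        if a % d == 0 and b % d == 0:
            return d

assert gcd_euclid(1071, 462) == gcd_brute(1071, 462) == 21
```

Same answer, but Euclid reaches it in three modulo steps where brute force grinds through hundreds of trial divisions.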
> How do you generate the algorithm in the first place?
Ah, I think I see what you're saying... are you conjecturing that the process of evolving consciousness was itself a brute-force process?
Where understanding its underlying process and being able to implement it ourselves, perhaps even more thermodynamically efficiently, is inconsequential, because our efforts are only possible thanks to the original conjectured brute-force process that allowed us to abstract to such a degree..? This process being the 'assumption' to be appended to the proof?
But this again seems like a philosophical debate.. one of life and negative entropy
How do you define general intelligence? How do you defend the statement that 'intelligence is equivalent to compression.'?
If you would have to consider all of existence as the assumption, then a bitwise representation of our own intelligence would be a vanishingly small subset of the bitwise representation of all things. Expressing it would seem to imply some process substantially more efficient than brute-forcing every possible state. Or luck is real? Or we underestimate the complexity of 'general intelligence' and in actuality the search is ongoing? Or some undiscussed other?
Why wouldn't AIXI count under your definition? (I recognize that AIXI can't be truly implemented, but I don't see where your explanation fits in a "unless you have a halting oracle")
In this case "communicate" is in a narrow sense - this communication is very slow and Arc has more to do with plasticity rather than the electrical response of neurons.
An example: if I say something to you in, say, Japanese, I also need to teach you Japanese in the same phrase unless there is shared context. Computers run on a very impotent form of language, logic, which is rigorous and general-purpose but has no "escape hatch" (basically the sentiment of Gödel, Escher, Bach with regard to rigorous systems).
Similarly, the brain surely has limits to parallelism, for the same sorts of reasons a computer does. You only have one mouth, so if two parts of the brain tried to speak, fully parallelized, you'd get nonsense at best. They have to agree on what to say, which is effectively serialization.
Then there's the combination of tone, word choice, accent, body language and every other context embedded in our language, which produces a very complex meaning, even for one word.
We just need more cores without spiking energy usage.
Being electrochemical, the brain is “slow” compared to a computer (around 10 Hz). But it’s massively parallel and an interesting combination of digital and analogue.
The major problem is that we are not trained today to recall most of what gets laid down. We are far too dependent on external storage mechanisms (books, videos, etc.) and no longer train ourselves to actually recall and pass on our knowledge.
I am not saying that external memory is bad, just that we should be using all options available to us.
Our understanding of the brain and its mechanisms is very much in the very early stages - we have just scratched the surface.
You will never get the answer if you look at it from a narrow-minded perspective.
A brain is equivalent to billions of processors connected in a naturally efficient network, functioning almost effortlessly.
Something along these lines seems like a plausible comparison to a brain.
What if a brain is not equal to a processor, but a neuron is equal to a processor?
Can we conclude that if a task takes a human Y milliseconds, then a neural network with Y/X layers is sufficient for that task?
The brain consists of many structures, many of them involved in non-cognitive tasks.
Of course, one could go ahead and simply compare the conscious compute power of a human to that of a computer (e.g. FLOPS), but then that number would be something like "1/100 to 5 FLOPS".
Imagine running a program on a computer where everything is somehow memoized, on a system that is always powered on.
IIRC the frequency of a neuron's output is used to encode the magnitude of the signal.
Take for example a neuron transmitting a pain signal, low frequency is used to transmit low levels of pain.
High frequency, extreme pain.
[1] https://en.wikipedia.org/wiki/Hodgkin%E2%80%93Huxley_model
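The rate coding described above (stimulus magnitude encoded as spike frequency) can be sketched in a few lines. The mapping and the 100 Hz ceiling are illustrative, not physiological:

```python
# Toy rate-coding model: a normalized stimulus in [0, 1] maps
# linearly to a firing rate, clamped at the ends of the range.
def spike_rate(stimulus, max_rate_hz=100.0):
    """Map a normalized stimulus (0..1) to a firing rate in Hz."""
    return max(0.0, min(1.0, stimulus)) * max_rate_hz

print(spike_rate(0.1))  # mild stimulus -> low firing rate
print(spike_rate(0.9))  # strong stimulus -> high firing rate
```

So under this coding, "low pain" and "extreme pain" travel down the same fibre and differ only in how often it fires.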
I guess sleep is the closest thing we have but to me that's like saying drink water to fix tooth decay.
The analogy is flawed, but the brain already has some pretty intricate waste-clearance ("garbage collection") mechanisms. As we learn more about them, it'll open up new paths for clinical treatments for a host of problems.
0. https://www.ncbi.nlm.nih.gov/pubmed/24136970
1. https://www.nih.gov/news-events/nih-research-matters/how-sle...
2. http://www.sciencemag.org/news/2018/01/alzheimer-s-protein-m...
Your memories and experiences (potentially one and the same) colour your experience of the world and make you react in different ways, not always beneficial - but the phenomenon wouldn't exist if it wasn't historically, evolutionarily beneficial.
Are you trying to say that brains are quantum computing devices? Maybe, but we don't have any particular evidence for that, so it's on a par with saying "The Flying Spaghetti Monster has a tiny noodly appendage that reaches through hyperspace into each neuron, and that's how we think."
We can observe brains in a lot of different ways - active electromagnetics, chemical sampling, microscopy, MRI... and we can say that injuring certain areas of the brain will impede certain functions. We know that excesses or droughts of some chemicals are associated with emotions, depression, and some diseases.
But we don't have a good model for how brains think, and among the good models we don't have, quantum computing is one of them.