There's little sense in ignoring the whole basic mode of operation, physics, chemistry and biology of the brain in order to analogise it to another system without any of those properties.
This, at best, provides a set of inspirations for engineers -- it does nothing for science.
Sure there is. People had a feel for it back in "clockworks" times; nowadays we have a much better grasp thanks to progress in physics and math, particularly CS: the mode of operation is an implementation detail. Whatever the mode, once you understand the behavior well enough to model it in computational terms, you can implement it in anything you like - gears and levers, pistons, water flowing between buckets, electrons in silicon, photons going through lenses, photons diffusing through metamaterials, sound waves diffusing through metamaterials - and yes, also via a person locked in a room full of books telling them what to draw in response to a drawing they receive, via a billion kids following a game to the letter, via corporate bureaucracy, via board game rules, etc.
Substrate. Does. Not. Matter.
The only thing limiting your choice here is a practical one. Humanity is getting good mileage out of electrons in silicon, so that's the way to go for now. Gears would work too; they're just too annoying to handle at scale.
Of course, today we don't have a full understanding of the biological substrate - we can't model it fully in terms of computation, because it's a piece of spontaneously evolved nanotech and we've barely begun being able to observe things at those scales. We have a lot of studying ahead of us - but that's about learning how the gooey stuff ticks, what it computes and how. It's not about some new dimension of computation.
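To make the equivalence concrete, here's a toy sketch (Python, with the "substrates" obviously simulated): the same abstract computation - 3-bit addition, wrapping at 8 - realized three ways. An observer at the I/O boundary can't tell which one produced the answer.

```python
# Substrate 1: "silicon" - native integer arithmetic.
def add_arithmetic(a, b):
    return (a + b) % 8

# Substrate 2: "logic gates" - a ripple-carry adder built from AND/OR/XOR,
# the kind of thing you could realize in gears or water valves.
def add_gates(a, b):
    result, carry = 0, 0
    for i in range(3):
        x, y = (a >> i) & 1, (b >> i) & 1
        s = x ^ y ^ carry
        carry = (x & y) | (x & carry) | (y & carry)
        result |= s << i
    return result

# Substrate 3: "room full of books" - a frozen lookup table,
# no arithmetic performed at query time at all.
LOOKUP = {(a, b): add_gates(a, b) for a in range(8) for b in range(8)}

def add_lookup(a, b):
    return LOOKUP[(a, b)]

# All three agree on every input within the bounds:
for a in range(8):
    for b in range(8):
        assert add_arithmetic(a, b) == add_gates(a, b) == add_lookup(a, b)
```

Within the stated bounds (inputs 0..7), the three are behaviorally identical; only speed, cost, and failure modes differ.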
It only doesn't matter for counting a system as implementing a pure algorithm, i.e., one with no device access. This is an irrelevant theoretical curiosity.
Electronic computers are useful because they're electronic -- they can power devices, and modulate devices using that power. This cannot be done with wood, or most anything else.
"Substrate doesn't matter" is, as a scientific doctrine, pseudoscience, and as a philosophical one, theological.
The causal properties of matter are essential to any really-existing system. Non-causal, purely formal properties of systems which can be modelled as functions from the naturals to the naturals (i.e., those which are computable) are useless.
On the contrary. That's an implementation detail. You can "power devices, and modulate devices" by having a clockwork computer with transducers at the I/O boundary, converting between electricity and mechanical energy at the edge. It would work exactly like a fully electronic computer, if built to implement the same abstract computations - and as long as you use it within its operational envelope[0], you wouldn't be able to tell the difference (except for the ticking noise).
> The causal properties of matter are essential to any really-existing system. Non-causal, purely formal properties of systems which can be modelled as functions from the naturals to the naturals (i.e., those which are computable) are useless.
Yes and no. Of course the causal properties of matter... matter. But the breakthrough in understanding that came with the development of computer science and information theory is this: you can take the "non-causal, purely formal" mathematical models of computation, define some bounds on them (no infinite tapes), and then use real-world matter to construct a physical system following that mathematical model within the bounds - and any such system is equivalent to any other, within those bounds. The choice of what to use for the actual implementation is made on practical grounds - i.e. engineering constraints and economics.
It's how my comment reached your screen, despite being sent through some combination of electrons in wires, photons down a glass fibre, radio signals at various frequencies - hell, maybe even audio signals through the air, or printouts carried by pigeons[1]. Computer networks are living proof that substrate doesn't matter - as long as you stick to the abstract models and bounds described in the specs for the first three layers of the ISO/OSI model, you can hook up absolutely anything whatsoever to the Internet and run TCP/IP over it, and it will work.
I bet there's at least one node on the Internet somewhere whose substantial compute is done in a purely mechanical fashion. And even if not, it could be done if someone wanted - figuring out how to implement a minimal TCP/IP stack using gears and switches is something a computer can do for you, because it's literally just a case of cross-compilation.
--
[0] - As opposed to e.g. plugging 230V AC into its GPIO port; the failure modes will be different, but that has no bearing on either machine being equivalent within the operational bounds they were designed for.
[1] - See RFC 1149, "A Standard for the Transmission of IP Datagrams on Avian Carriers".
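On the "cross-compilation" point: here's a rough sketch (Python, toy scale) of lowering a computation to pure NAND gates - a primitive you could physically build from relays, gears, or marble runs. Everything below is derived from NAND alone; a full TCP/IP stack would just be a (much) bigger pile of the same.

```python
# The one primitive the substrate has to provide:
def NAND(a, b):
    return 1 - (a & b)

# Everything else cross-compiles down to NAND:
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def full_adder(a, b, carry_in):
    """One-bit full adder, built entirely out of NANDs (via the gates above)."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add8(a, b):
    """8-bit adder: the same computation silicon does, gate by gate."""
    result, carry = 0, 0
    for i in range(8):
        s, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        result |= s << i
    return result

assert add8(100, 55) == 155
assert add8(200, 100) == (200 + 100) % 256  # wraps like real hardware
```

Swap the one-line NAND for a gear assembly or a relay and nothing upstream has to change - that's the whole argument in miniature.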
The deepest fundamental structures in the brain[0] are quantum fields, which are also the deepest fundamental structures in everything else.
There is no known quantum field of "soul" or "intelligence".
The right abstraction is higher, and could still be a whole lot of things; but as maths can be implemented in logic, which can be implemented in electronics or clockwork or hydraulics, it doesn't matter what analogy is used — and my mild disagreement here is that such inspiration has been useful and gotten us this far.
[0] that we know of
I appreciate there's some (imv strange) sense of 'intelligence' where 'finding the right puzzle piece' counts. I cannot fathom why we care about such a notion, and it seems to have almost nothing to do with what we do care about re 'intelligence'.
We care about that thing animals do, that thing which some do better than others. That thing which evolution brought about for (rapid) adaptive fitness to one's environment.
'Everything else is stamp collecting'
We already have a perfectly good understanding of puzzles and their solutions -- animals are their inventors.
Intelligence isn't in the solution to a puzzle; it's in its design, and especially in what one does when one cannot solve it -- i.e., how one adapts.
The csci view of 'intelligence' is an act of self-aggrandisement: intelligence, it turns out, is... csci!
This is nonsense.
That said, the way you're using biological evolution in your comment sounds as much like a strange analogy as all of the others: we may have some genetically programmed responses to snakes (bad) and potential mates (good), but we can also say that a loss of hydraulic pressure in our brain is a stroke, and use electrical signals to both read from and write to the brain.
What we evolved to think, while interesting from a social perspective, seems to me like the least interesting part of our brains from an AI perspective — it's the bit that looks like a hard-coded program rather than learning, at the scale of a human life and seen from within.