I would share this with him, but I imagine it would go completely over his head.
> The implied abstraction, in which time has disappeared from the picture, is however beyond the computing scientist imbued with the operational approach that the anthropomorphic metaphor induces. In a very real and tragic sense he has a mental block: his anthropomorphic thinking erects an insurmountable barrier between him and the only effective way in which his work can be done well.
I'm now doing sysadmin work, and do a little bit of scripting here and there while working on teaching myself Python and C. Do you have any advice to avoid the kind of anthropomorphic thinking you mentioned your coworker does? The last thing I want to do is learn the "wrong way" and potentially struggle to rethink how to code.
You have to accept that Japanese grammar might have some basic concepts in common with the grammar of western languages (verbs, objects, adjectives, etc) but that really doesn't get you far.
(for japanese learners reading this comment, the article that pushed the first domino for me was this one: http://www.guidetojapanese.org/blog/2007/09/03/repeat-after-... )
Regarding your question about programming, I'd encourage you to read both The Little Schemer and Structure and Interpretation of Computer Programs.
Anthropomorphized code reads like someone reading off a gigantic checklist that keeps getting longer and longer. Each time a new problem comes up, we solve it by adding another "If A then B" except it's something like this
    if foo.bar.baz[0][1]['value']
      if (foo.bar.baz[0][1]['value'] == 'undocumented business rule')
        globalVariable = false # why? who knows
      else
        globalVariable = true
      globalVariable = foo.bar.baz[0][1]['value']
Relying on “If A then B” quickly gets out of hand. The number of possible paths through your code grows quickly, even exponentially, depending on how the program is structured. I don't want to be dogmatic, but keep asking yourself whether you can structure your code to avoid relying heavily on conditionals, especially nested ones. Focus on the problem you're trying to solve, not the operational details of the language or other tools you're using. At the same time, keep exposing yourself to languages that differ from each other in semantics and runtimes, and learn about different programming paradigms (declarative, imperative, functional, etc.), focusing on semantics rather than syntax.
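To make that advice concrete, here's a small sketch in Python (the rule names and numbers are made up, just to show the shape of the refactor): the same branching logic expressed once as a lookup table instead of an ever-growing nest of ifs.

```python
# Before: every new business rule adds another nested branch,
# and the number of paths through the function keeps multiplying.
def discount_nested(customer_type, order_total):
    if customer_type == "member":
        if order_total > 100:
            return 0.15
        else:
            return 0.10
    else:
        if order_total > 100:
            return 0.05
        else:
            return 0.0

# After: the rules live in one data structure. Adding a rule means
# adding a row, not threading another branch through the control flow.
DISCOUNTS = {
    ("member", True): 0.15,   # member, large order
    ("member", False): 0.10,  # member, small order
    ("guest", True): 0.05,    # guest, large order
    ("guest", False): 0.0,    # guest, small order
}

def discount_table(customer_type, order_total):
    return DISCOUNTS[(customer_type, order_total > 100)]
```

The point isn't that dictionaries are magic; it's that the second version separates the *rules* (data) from the *mechanism* (one lookup), so each new "If A then B" doesn't add a new path you have to reason about.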
So for programming the same gimmick would amount to, rather than just building code out of the constructs you're most comfortable with, also read code written by experts and try to imitate the patterns you find there.
But in practice, it seems to be the opposite: most people have a hard time thinking abstractly. We need analogies to make sense of things.
For example, there was an experiment [1] showing that even basic logic is easier to handle if it's thought of as "detecting cheating" than as a pure logic problem.
Just-so stories do nothing to aid the methodical derivation of conclusions from premises, and can serve to obfuscate and confuse the issues.
I understand the aspiration, but I always think of the counterpoint implicit in Knuth's famous statement: "Beware of bugs in the above code; I have only proved it correct, not tried it."
We somehow managed to repurpose it for mathematics in a very short amount of time on the evolution timescale.
So while we are capable of abstraction, our brains work better when we rethink the problem in terms of throwing rocks. Anthropomorphism helps our primitive mind and our higher functions cooperate.
It does fall faster, if you drop both stones in the atmosphere; the heavier stone has more mass per unit of surface area, to better overcome a constant level of air resistance.
Aristotle's physics were based on pretty accurate observation of the pre-industrial world, although they're surprisingly short on first principles. The really serious shortcomings were mostly related to impetus, the Aristotelian theory of motion -- it accidentally models friction well for objects in continuous contact with the ground, but it's very hard for Aristotelian physics to explain why an arrow keeps flying after it leaves the bowstring, and even gains speed after its apogee.
Not necessarily; for one thing, if the masses of the two stones are not significantly different, the difference in the effects of air resistance would be unmeasurable given the instruments of the day.
For another, the shapes of the stones are important. One with a much greater sectional density, oriented correctly, could fall faster than the other, even if it were lighter.
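A rough sketch of that point, using the standard constant-drag-coefficient model (the masses, areas, and drag coefficient below are hypothetical, just to illustrate): terminal velocity goes as the square root of sectional density (mass over frontal area), so a lighter stone with a higher sectional density really can fall faster than a heavier one.

```python
import math

def terminal_velocity(mass_kg, area_m2, drag_coeff=0.47,
                      rho_air=1.225, g=9.81):
    """v_t = sqrt(2 m g / (rho * Cd * A)): speed where drag balances weight."""
    return math.sqrt(2 * mass_kg * g / (rho_air * drag_coeff * area_m2))

# Two hypothetical stones:
heavy_blunt = terminal_velocity(mass_kg=2.0, area_m2=0.020)  # sectional density 100 kg/m^2
light_sleek = terminal_velocity(mass_kg=1.0, area_m2=0.005)  # sectional density 200 kg/m^2

# The lighter but "sleeker" stone has the higher terminal velocity,
# by a factor of sqrt(200/100) ~ 1.41.
```

Of course, over the height of a tower neither stone may get anywhere near terminal velocity, which is part of why the differences were so hard to measure with the instruments of the day.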
But that's just nitpicking. If I remember correctly, Galileo used inclined planes because there were only imprecise water clocks available to measure the passage of time. His choice of apparatus was brilliant :)
The article lost me when the author blithely claimed that it was Galileo who dropped two objects from the Leaning Tower of Pisa. The author clearly has no idea what happens when you do that; Galileo was far too good at choosing experimental setups that would give him the answer he wanted, and ignoring any that would prove him wrong, to make such an obvious mistake.
I'm actually curious why people believe one emergent system has a will (eg, people) but another doesn't (eg, evolution/biosphere).
What lets you determine when a composite object or pattern in an automata has crossed that threshold?
There isn't really a threshold, beyond that pattern or object somehow communicating to you that it does indeed possess a will or some form of consciousness. The default is to assume it doesn't until it shows it does, rather than invoking an animist world view that imbues a spirit into every complex phenomenon (weather, death, etc.).
I'm in agreement with you though in general. It is arrogant to think that humans are at the terminal end of emergent complexity. Maybe our minds are too limited to conceive of something arising from a global or galactic scale.
Are our individual cells aware of the person?
We have good reason to think that anything like "will" (leaving aside "free will" here, and just talking about things like the ability to achieve ends by developing plans and utilising information about the state of the environment) requires specific information processing capabilities. Capabilities that a process like natural selection does not have.
It is?
I only ever see this come up in layman's discussions of evolution. I've never seen the anthropomorphism of evolution come up among actual biologists / grad students.
Talk about bias!