Thought Experiment: Say an isolated child were given the primitives of your choice, and nothing more. What's the likelihood that it would invent calculus on its own during its lifetime, or any other significant human discovery or invention? Would it figure out human flight on its own?
Leave that. Say there are no teachers/industry, just all the textbooks. Still, what is the likelihood of the child teaching itself calculus, or how to build a flying machine, from mere reading material?
Those may be too complex to invent from scratch in a few decades.
Long intellectual collaborations are a common good. You don't need to consider them estranged from their users, though. By virtue of an emotional audit, knowledge I accept is assimilated by me and becomes mine. It was made by my allies, essentially by past instances of myself - nothing external here. (More like allies if it is a precursor that I had to non-trivially modify for my needs, and more like self if it is a product that I accepted verbatim.) This situation doesn't contradict my sense of individualism. (In my view, personal identity is defined by preferences. So if we have the same preferences, we are one. And it seems obvious that people with similar preferences will create functionally similar designs.)
So even if a child would in principle want calculus but wouldn't create it in a lifetime, they would take the steps they can in the direction they're interested in. The distance covered would depend on capability, but more than "just a few ideas", as you say, wouldn't be improbable. Then the child may try to appoint those they deem worthy of receiving their work, or may publish it for everyone. That someone may deceive the child about who they are is an unsolvable flaw here. This may change in the future when people solve old-age mortality.
> Say there are no teachers/industry, except just all the textbooks.
To use books, one needs the ability to read. Without somebody to teach one to read, there is a small chance of success only if there exists a book that teaches written language while assuming no prior knowledge of written language (in the usual sense). It would rely on the person figuring out, on their own, some way to engage with the book and learn from it, and this would work only if the person were very curious. The extent to which written language would be learned would depend on how well the interaction with the book works and on how smart the person is; the probability of the whole thing working seems very low, but it also depends on those factors. The theory behind speechless video-game tutorials would mainly apply here, except the medium is also much more limited.
Once you can read, I believe text is sufficient to transfer knowledge. Quality subject books, together with some books about effective learning, may be enough. However, even if one finds a big library with all the knowledge early in life, one may simply not know how to map what one wants onto the library's index; a guide book placed in front of everything may solve that. I don't know whether a lifetime is enough to actually build a flying machine given only raw materials.
Yes, that is the source of my assertion that culture/collective thought/pre-computed results is of primary importance ("essential"). You need way more than mere primitives to get up to any useful level of knowledge in a given lifetime.
> By virtue of emotional audit, knowledge I accept is assimilated by me, becomes mine. It was made by my allies, essentially by past instances of myself - nothing external here.
In other words, you are "sampling" something from the common culture based on an "emotional audit", and then tweaking the sample to suit your "preferences"? Formally:
P = Bundle of preferences (sampling biases that an emotional audit will check for)
CG = Common good
I_1 = State of identity
T = Transformation function that converts a sample of the common good to suit the bundle of preferences
I_2 = State of identity after integrating the result of the transformation
I_1 + T(P(CG)) => I_2 (your formula)
i.e. previous_identity_state + transform(select(common_good)) = new_identity_state
You argue that if transform(select1(common_good)) = select2(common_good), then select1 = select2, i.e. preference1 = preference2 [1]
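To make this reconstruction concrete, here is a minimal sketch in Python. It is only my reading of your formula, and every name in it (Meme, preferences, transform, Identity) is invented for illustration, not taken from anything you wrote: the common good is a set of memes, preferences are a sampling predicate, and the identity state is the accumulated set of transformed samples.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Meme:
    content: str

# P: the bundle of preferences, modelled as a sampling bias (a predicate).
def preferences(meme: Meme) -> bool:
    return "calculus" in meme.content  # purely illustrative bias

# T: the transformation that adapts a sampled meme to the preferences.
def transform(meme: Meme) -> Meme:
    return Meme(meme.content + " (adapted)")

@dataclass(frozen=True)
class Identity:
    memes: frozenset = frozenset()

    def integrate(self, common_good):
        # I_1 + T(P(CG)) => I_2: sample by preference, transform, absorb.
        sampled = {m for m in common_good if preferences(m)}
        return Identity(self.memes | frozenset(transform(m) for m in sampled))

common_good = {Meme("calculus textbook"), Meme("pottery technique")}
i1 = Identity()
i2 = i1.integrate(common_good)  # new identity state after integration
```

On this reading, claim [1] says that two identities whose preference predicates agree will end up with functionally the same set of memes.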
The problem with this model is that it is dualistic: you envision immutable (?) preferences and mutable identity states (and, obviously, a mutable common good, which is a superset of the identity states and "other stuff"). The common good is being manipulated in a distributed and concurrent way all the time, so even if the preferences remain immutable, the common good keeps changing, and therefore [1] wouldn't work out (the actual equation would have common_good1 and common_good2)... I think all dualistic ideas of defining the mind tend to fall into this sort of trap.
That is, the following will not remain true ("create functionally similar designs") due to changes/updates in the common good (especially across time):
> So if we have same preferences, we are one. And it seems obvious that people with similar preferences will create functionally similar designs.)
I think a better model is to consider all localized preferences as part of the larger common good. There is just one common good (due to "dependency" of thoughts), across time and space, out of which everything emerges ("collective thought"). Such a model wouldn't admit any sort of immutable identity within the realm of thought.
That is just because a "useful level of knowledge" is required to participate in society as I want to - most of the low-hanging fruit has already been picked. If you're at the frontier, you're immediately productive.
> In other words:
Sorry, I phrased that a little incorrectly. It's more like Filter(Mutate(AmbientCulture)): I selectively copy or mutate everything I see into stuff that gets accepted by the audit, after which it is my knowledge. (Mutate creates such modifications of memes as I am able to make, and Filter removes unsuitable ones; but computationally it is probably a one-stage directed process.) I would call it a knowledge state, not an "identity state"; my identity is immutable too. Yes, preferences are immutable; preferences = identity.
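A corresponding sketch of this corrected phrasing, under the same caveats (Python, with every identifier invented here for illustration): Mutate proposes variants of whatever is encountered in the ambient culture, Filter is the emotional audit, and whatever passes is added to the knowledge state while the preferences embodied in the audit stay fixed.

```python
import random
from typing import Set

Meme = str  # illustrative: a meme is just a string here

def mutate(meme: Meme) -> Meme:
    # Propose a modification "as I am able"; here only a trivial annotation.
    return meme if random.random() < 0.5 else meme + " (modified)"

def emotional_audit(meme: Meme) -> bool:
    # The immutable preferences acting as the filter.
    return "unsuitable" not in meme

def absorb(ambient_culture: Set[Meme], knowledge: Set[Meme]) -> Set[Meme]:
    # Filter(Mutate(AmbientCulture)) applied in a single directed pass.
    candidates = {mutate(m) for m in ambient_culture}
    return knowledge | {m for m in candidates if emotional_audit(m)}

knowledge_state: Set[Meme] = set()
knowledge_state = absorb({"useful idea", "unsuitable idea"}, knowledge_state)
```

Filter and Mutate are written as separate steps here only for readability; collapsing them into one pass, as you describe, changes nothing functionally.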
> You argue that if ...
If I understand you correctly, your criticism is that I use momentary artifacts, created in dependence on a given environment, to establish an identity independent of the environment.
> the following will not remain true ("create functionally similar designs") due to changes/updates in the common good
Culture changes continuously. A given distributed identity would create the needed design over time, at each moment relying only on the current state of its own work. (It follows that identities can be nested: an identity may create a meme that will be owned by any identity that contains it.) If the work is not synchronized across space, it may fork and diverge, but the differences would not be functionally significant in the end.
If something suppresses your work, you should defend against this, or you will be screwed too.