Math is a perfect field for machine learning to thrive in because, theoretically, all the information ever needed is tied up in the axioms. In the empirical world, however, knowledge only moves at the speed of experimentation, which is an entirely different framework and much, much slower, even if there are some areas where previous experimental outcomes leave room for catching up.
Having a focus in philosophy of language is something I genuinely never thought would be useful. It’s really been helpful with LLMs, though probably not in the way most people think. I’d say that folks who are curious should all be reading Quine, Wittgenstein’s Investigations, and probably Austin.
There’s one domain of knowledge I think you have yet to mention: fundamentally computationally hard problems. What comes to mind regarding such problems that are nevertheless of practical benefit are physics simulations, materials simulations, and fluid simulations, though there exist problems that are even more provably computationally difficult. It seems to me that with these systems, the chaotic nature means that even given one infinitely precise observation of a deterministic system, computing a future state is difficult, even though once that state has been computed, memorizing it seems comparatively trivial.
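A toy illustration of that last point (my own sketch, not from the thread): in the logistic map at r = 4, a standard example of a chaotic deterministic system, an observation error of one part in 10^12 swamps the prediction within a few dozen steps, while checking or memorizing an already-computed trajectory is trivial.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; chaotic at r = 4."""
    return r * x * (1.0 - x)

# Two trajectories whose initial conditions differ by one part in 10^12.
x_true, x_obs = 0.3, 0.3 + 1e-12
max_err = 0.0
for step in range(60):
    x_true, x_obs = logistic(x_true), logistic(x_obs)
    max_err = max(max_err, abs(x_true - x_obs))

# The tiny observation error is roughly doubled each step, so within
# a few dozen iterations the two trajectories are entirely decorrelated.
print(max_err)
```

The same sensitivity is why one "infinitely precise" observation doesn't help in practice: any finite-precision copy of it diverges.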
I’d say the two most important topics here are philosophy of language (understanding meaning) and philosophy of science (understanding knowledge).
I’ve already mentioned the language philosophers in an edit above, but in philosophy of science I’d add Popper as extremely important here. The concept of negative knowledge as the foundation of empirical understanding seems entirely lost on people. The Black Swan, by Nassim Taleb, is a very good casual read on the subject.
I have a very vivid dream life myself, and I have had 'out of body' experiences starting at age 6, so maybe I read it sometime after having some of those weird experiences and that's possibly why I remember reading it.
Not really; the normal way that math progresses, just like everything else, is that you get some interesting results, and then you develop the theoretical framework. We didn't receive the axioms; we developed them from the results that we use them to prove.
If you want to change the axioms to better reflect some aspect about life, that's all well and good, but everything will still fall out of the new axioms.
That doesn't make for a perfect field, or even a good field, for machine learning to thrive in; what we care about is finding useful results. Starting with arbitrary axioms is a good way to prevent that from happening.
Compare this discussion from an algebra textbook I've been reading recently:
-----
The possibility of combining two elements of A(S) to get yet another element of A(S) endows A(S) with an algebraic structure. We recall how this was done: If f, g ∈ A(S), then we combine them to form the mapping fg []. We called fg the product of f and g, and verified that fg ∈ A(S), and that this product obeyed certain rules.
From the myriad of possibilities we somehow selected four particular rules that govern the behavior of A(S) relative to this product.
[...]
To justify or motivate why these four specific attributes of A(S) were singled out, in contradistinction to some other set of properties, is not easy to do. In fact, in the history of the subject it took quite some time to recognize that these four properties played the key role. We have the advantage of historical hindsight, and with this hindsight we choose them not only to study A(S), but also as the chief guidelines for abstracting to a much wider context.
-----
It takes work, a lot of work, to determine what axioms you should use. Where do you think the information necessary to make that determination comes from?
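For concreteness (my own sketch, not from the book): the four rules the quoted passage alludes to are the group axioms, closure, associativity, an identity, and inverses, and one can verify mechanically that A(S), the set of bijections of a set S under composition, satisfies all four. Here S = {0, 1, 2}, with one common composition convention (the book fixes its own):

```python
from itertools import permutations

S = (0, 1, 2)
A_S = list(permutations(S))  # all bijections of S, each as a tuple f with f[s] the image of s

def compose(f, g):
    """The product fg: apply f, then g (one convention among several)."""
    return tuple(g[f[s]] for s in S)

identity = S  # the map sending each s to itself

# The "four particular rules" singled out in the passage:
closure = all(compose(f, g) in A_S for f in A_S for g in A_S)
assoc = all(compose(compose(f, g), h) == compose(f, compose(g, h))
            for f in A_S for g in A_S for h in A_S)
has_id = all(compose(f, identity) == f == compose(identity, f) for f in A_S)
has_inv = all(any(compose(f, g) == identity == compose(g, f) for g in A_S)
              for f in A_S)

print(closure, assoc, has_id, has_inv)  # → True True True True
```

The hindsight the passage mentions is exactly that these four checks, and not some other set of properties of A(S), turned out to be the fruitful ones to abstract.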
You're talking about an inductive framework in which we create deductive frameworks that model the world we live in as best as we can tell. This is very, very hard work. The flipping back and forth between the inductive framework and the deductive framework -- between modeling reality and discovering and testing new aspects -- is the heart of what knowledge is, but again, this is the dance between the two frameworks.
I'm just talking about that deductive framework, which exists locked in its by-definition arbitrary axioms. It's in that framework that machine learning should thrive, because all of the possible propositions will fall out of the axioms.
In this sense mathematicians are board-game designers. It matters less how well the game describes nature's reality than how fun it is to play the game that results.
Now if you were a physicist, the game has already been designed by some other mechanism, and you have to probe it to understand the rules and discover its consequences.
There's also intuitive knowledge btw.
Anyway, the recent developments in AI make a lot of very interesting things practically possible. For example, our society is going to want a way to reliably tell whether something is AI generated, and a failure to find one pretty much settles the empirical part of the Turing test question. Alternatively, if we actually find something in humans that AI can't reliably mimic, that's going to be a huge finding. With millions of people wondering whether posts on social media are AI generated, we have inadvertently conducted the largest-scale Turing test ever.
The fact that AI seems to be able to (digitally) do anything we ask for is also very interesting. If humans are not bogged down by the small details or cost of implementation concerns, and we can just say what we want and get what we wished for (digitally), what level of creativity can we reach?
Also once we get the robots to do things in the physical space...
(1) "intuitive knowledge" - whether or not you want to take "intuitive knowledge" as a type of knowledge (I don't think I would) is basically immaterial. The deductive-inductive dynamic is between reasoning frameworks, not kinds of knowledge. The two reasoning frameworks are pointed in opposite directions. The deductive framework is inherited from the rationalist tradition: its premises are by definition arbitrary and cannot be justified, and its information is perfect (except when you get rare truth values, like something being undecidable). The inductive/empirical framework is quite the opposite: its premises are observations and absolutely not arbitrary, its information is wholly imperfect (by necessity; thanks, Popper), and there is always a kind of adjustable resolution to any research conducted. Newtonian vs. Einsteinian physics, for example, shows how zooming in on the resolution of experimentation reveals how a perfectly workable model can fail once instruments get precise enough. I'll also note here that abduction is another niche reasoning framework, but it's effectively immaterial to my point here.
(2) The Turing Test is not, and has never been, a philosophically rigorous test. It's effectively a pointless exercise. The literature about "philosophical zombies" has covered this, but the most important work here is Searle's "Chinese Room."
>The fact that AI seems to be able to (digitally) do anything we ask for is also very interesting.
I don't even know how to respond to this. It's trivially, demonstrably false. Beyond that, my entire point is that philosophy of language presents such hard problems with regard to what meaning actually is that it might end up imposing a kind of uncertainty principle on this line of thinking in the long run. Specifically, Quine's indeterminacy of translation.
I thought I agreed with most of your original comment that I replied to, and here you are ready to fight. I'm not even sure what you're fighting, and I certainly didn't have in mind the things you responded to.
Well, I guess I learned not to talk to philosophers (especially those who went through school) the hard way. Sometimes I forget my lesson and it's always sad when this happens. Have a good day.
And the literature about philosophical zombies is contentious, to say the least, and much of it is also among the worst arguments in philosophy--Dennett confided in me that he thought it set back progress in Philosophy of Mind for decades, along with that monstrosity of misdirection, "the hard problem". Chalmers (nice guy, fun drunk at parties, very smart, but hopelessly deluded) once admitted to me on the Psyche-D list that his argument in The Conscious Mind that zombies are conceivable is logically equivalent to denying that physicalism is conceivable, so it's no argument against physicalism ... he said he used the argument to till the soil to make people more susceptible to his later arguments against physicalism (which I consider unethical)--all of which are bogus, like the Knowledge Argument--even Frank Jackson who originated it admits this.
Similarly, Robert Kirk, who coined the phrase "philosophical zombie" in 1974, wrote his book Zombies and Consciousness "as penance", he told me when he signed my copy.
> I don't want to do the thing where we fight on the internet.
Nor me ... I've had these "fights" too many times already and I know how they go, and I understand why people believe what they believe and why they can't be swayed, so I won't comment further ... I just want to put a dent in this "I'm a philosopher" argumentum ad verecundiam.
I’m less inclined to push back against philosophical zombies, as the argument seems trivially plausible from a position of solipsism.