If you want to change the axioms to better reflect some aspect of life, that's all well and good, but everything will still fall out of the new axioms.
In this sense mathematicians are board-game designers. It matters less how well the game describes nature's reality than how fun it is to play the game that results.
Now if you were a physicist, the game has already been designed by some other mechanism, and you have to probe it to understand the rules and discover their consequences.
That doesn't make for a perfect field, or even a good field, for machine learning to thrive in; what we care about is finding useful results. Starting with arbitrary axioms is a good way to prevent that from happening.
Compare this discussion from an algebra textbook I've been reading recently:
-----
The possibility of combining two elements of A(S) to get yet another element of A(S) endows A(S) with an algebraic structure. We recall how this was done: If f, g ∈ A(S), then we combine them to form the mapping fg [...]. We called fg the product of f and g, and verified that fg ∈ A(S), and that this product obeyed certain rules.
From the myriad of possibilities we somehow selected four particular rules that govern the behavior of A(S) relative to this product.
[...]
To justify or motivate why these four specific attributes of A(S) were singled out, in contradistinction to some other set of properties, is not easy to do. In fact, in the history of the subject it took quite some time to recognize that these four properties played the key role. We have the advantage of historical hindsight, and with this hindsight we choose them not only to study A(S), but also as the chief guidelines for abstracting to a much wider context.
-----
It takes work, a lot of work, to determine what axioms you should use. Where do you think the information necessary to make that determination comes from?
You're describing an inductive framework in which we create deductive frameworks that model the world we live in as best we can tell. This is very, very hard work. The flipping back and forth between the inductive framework and the deductive framework -- between modeling reality and discovering and testing new consequences -- is the heart of what knowledge is. But again, that is a dance between the two frameworks.
I'm just talking about that deductive framework, the one that exists locked into its axioms, which are arbitrary by definition. It's in that framework that machine learning should thrive, because every possible proposition falls out of the axioms.
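To make "propositions falling out of the axioms" concrete, here is a toy sketch. The tuple representation and the single inference rule are my illustrative assumptions, not anything from this thread: it forward-chains with modus ponens to enumerate consequences of an arbitrary axiom set.

```python
# Toy sketch: everything derivable "falls out" of whatever axioms you pick.
# Formulas are nested tuples: ('->', p, q) is an implication; strings are atoms.
# This is an illustration, not a real proof system.

def derive(axioms, steps=10):
    """Forward-chain with modus ponens until nothing new appears."""
    known = set(axioms)
    for _ in range(steps):
        new = set()
        for f in known:
            # modus ponens: from (p -> q) and p, conclude q
            if isinstance(f, tuple) and f[0] == '->' and f[1] in known:
                new.add(f[2])
        if new <= known:
            break  # fixed point reached: no new consequences
        known |= new
    return known

# Change the axioms and a different set of consequences falls out.
axioms = {'a', ('->', 'a', 'b'), ('->', 'b', 'c')}
print(sorted(derive(axioms), key=str))
```

Swapping in a different axiom set (say, dropping the atom 'a') yields a different, equally "valid" closure, which is the point being argued: the framework fixes what is derivable, whether or not it is useful.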
You like to repeat yourself. Are there other words you can use to describe what you're supposedly thinking? What does it mean for machine learning to thrive in a space?
In the same way that you can derive all valid proofs from a set of axioms, you can derive all valid 320x240 bitmaps from a (much simpler!) set of axioms. Does this mean that artwork is "a perfect field for machine learning to thrive [in]"? What would be an example of a field that isn't "perfect for machine learning to thrive in"?
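The bitmap analogy can be made concrete with a little arithmetic. A minimal sketch, assuming one bit per pixel (a simplification not stated above): the "axioms" fully determine the space of valid bitmaps, yet the space is so large that derivability alone tells you nothing about where the interesting ones are.

```python
# The "axioms" for 320x240 one-bit bitmaps: each pixel is independently 0 or 1.
# Every valid bitmap falls out of that rule, but the space is astronomically large.

WIDTH, HEIGHT = 320, 240
BITS_PER_PIXEL = 1  # assumption: monochrome; real formats use 24+ bits

num_bitmaps = 2 ** (WIDTH * HEIGHT * BITS_PER_PIXEL)

# The count itself has tens of thousands of decimal digits,
# so exhaustive enumeration is hopeless.
print(len(str(num_bitmaps)))  # → 23120
```

Which is the crux of the objection: "all artwork is derivable from these axioms" is true and useless, exactly as "all propositions are derivable from these axioms" is.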