The number of arguments I've had about "AI" with friends has me facepalming regularly. My understanding of why LLMs don't equate to "intelligence" is a direct result of that training. Yet admitting that AGI might actually be an algorithm we just haven't figured out yet is also a direct result of that training.
Most deep philosophical issues come from axiom consensus (and the lack thereof), the reflexive nature of deductive and inductive reasoning, and conceptions of Knowledge and Truth themselves.
It's pretty rare for these to be pragmatic problems, but occasionally they become relevant.