Two lenses for trying to understand this are Chaitin's result that the output of a LISP program cannot be essentially more complex than the program itself [1], or Markov's proof that manifolds in dimension >= 4 cannot be algorithmically classified.
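To make the former concrete: any program that prints a string is itself an upper bound on that string's Kolmogorov complexity, so the output can never be (provably) much more complex than the program that produced it. A toy illustration in Python, using zlib compression as a crude computable stand-in for complexity (the real quantity is uncomputable):

    import zlib

    def complexity_upper_bound(s: bytes) -> int:
        # Compressed size is a computable *upper bound* on Kolmogorov
        # complexity; the true K(s) is uncomputable.
        return len(zlib.compress(s))

    # A short program text that denotes a long but highly regular string:
    program = b"print('ab' * 1_000_000)"
    output = b"ab" * 1_000_000

    print(len(program))                    # ~23 bytes of program
    print(len(output))                     # 2,000,000 bytes of output
    print(complexity_upper_bound(output))  # tiny: the regularity is the point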
If you take the latter route and want to understand why the Russian school is so different, [2] is helpful.
IMHO the former gives an intuition for why, and the latter demonstrates it rigorously.
In ZFC, choice actually ends up implying PEM (this is Diaconescu's theorem), which is why using constructivism as a form of reverse mathematics helped it click for me.
This is because in the presence of excluded middle every sequentially complete metric space is complete, and we tend to care about useful facts like that; but just how huge the search space grows without PEM was hidden from me by that typical (and useful) a priori assumption.
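(For the curious, a sketch of Diaconescu's argument, for an arbitrary proposition P:

    \text{Let } A=\{x\in\{0,1\}: x=0 \lor P\},\quad B=\{x\in\{0,1\}: x=1 \lor P\}.
    \text{Both are inhabited, so choice gives } f \text{ with } f(A)\in A,\ f(B)\in B.
    \text{If } f(A)=1 \text{ or } f(B)=0, \text{ then } P.
    \text{Otherwise } f(A)=0\neq 1=f(B); \text{ but } P \Rightarrow A=B \Rightarrow f(A)=f(B), \text{ so } \neg P.
    \text{Either way, } P \lor \neg P.

Note how the choice function is forced to decide something about P that nothing else in the setup decides.)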
If you dislike the constructive approach, or don't want to invest in learning an obscure school of it, this recent paper [3] on the limits of finding a quantum theory of everything is another lens.
Yet another path is through Type-2 Turing machines and the Borel hierarchy: you can have an uncomputable number on the input tape, but the algorithm itself can only ever act on finite prefixes of it; meanwhile you can produce uncomputable numbers simply by randomly selecting or perturbing an infinite sequence.
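A minimal sketch of that picture in Python (the names are mine, purely illustrative): a real is an infinite digit stream, and a machine may only ever inspect a finite prefix of it:

    import itertools
    import random

    def random_binary_digits():
        # An "input tape": an endless stream of random bits. With probability 1
        # the real it encodes is uncomputable, yet machines can still consume it.
        while True:
            yield random.randint(0, 1)

    def apart(x_digits, y_digits, precision: int) -> bool:
        # Type-2 style: decide whether two reals differ somewhere in their
        # first `precision` digits. Only a finite prefix of each tape is read;
        # deciding true equality would require the whole infinite tape.
        for xd, yd in itertools.islice(zip(x_digits, y_digits), precision):
            if xd != yd:
                return True
        return False

    x = random_binary_digits()
    y = random_binary_digits()
    print(apart(x, y, precision=64))  # halts after at most 64 digits of each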
Really it is the difference between expressibility and what algorithms can do within what you can express.
Hopefully someone else can provide more accessible resources. I think a partial understanding of the limits of algorithms and computation will become more important in this new era.
[1] https://arxiv.org/abs/chao-dyn/9407003
[2] https://arxiv.org/abs/1804.05495
[3] https://arxiv.org/abs/2505.11773