I googled whether universal function approximators (which neural nets are considered to be) are also considered Turing complete. The general consensus seems to be not quite, since they are continuous and can't do discrete operations in the same way.
But that isn't quite the whole story, since they can be arbitrarily precise in their approximation. Here[0] is a paper addressing this question, which concludes that attention networks (Transformers) are Turing complete, under the assumption of arbitrary-precision arithmetic.
0: https://jmlr.org/papers/volume22/20-302/20-302.pdf
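
To make the "arbitrarily precise but still continuous" point concrete, here's a minimal sketch (plain Python, no framework, function names are my own) of a single sigmoid neuron approximating a discrete step function: cranking up the weight gets the output as close to a hard 0/1 step as you like, but it never becomes truly discrete.

    import math

    def sigmoid(x: float) -> float:
        return 1.0 / (1.0 + math.exp(-x))

    def soft_step(x: float, sharpness: float) -> float:
        # One "neuron": sigmoid(sharpness * x). As sharpness grows,
        # this approaches the discrete step function at x = 0.
        return sigmoid(sharpness * x)

    for k in (1, 10, 100):
        print(k, soft_step(-0.1, k), soft_step(0.1, k))
    # k=1:   0.47502, 0.52498   (barely a step at all)
    # k=100: 0.0000454, 0.9999546 (within ~5e-5 of a true 0/1 step,
    #        yet still strictly between 0 and 1)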