I think of this the way Ilya frames his test for consciousness: "train a model with an absolute absence of any training example remotely referring to the notion of a self or of feeling, and then ask it questions about feeling. If the model can do it, congrats, you've discovered consciousness." Similarly, if you train an architecture exclusively on the building blocks of a particular class of problem, while also avoiding any training problem where it could reach a correct answer merely by analogy (so that first-principles reasoning is the only option), then if it can still solve the problem, you have a genuine problem-solving architecture.
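As a rough sketch of what the data side of such a protocol might look like (the term list, toy corpus, and keyword-based filter below are all hypothetical illustrations, not an established benchmark), the core step is strict exclusion of anything that names, or could stand in by analogy for, the held-out concept:

```python
# Minimal sketch of the concept-exclusion protocol described above.
# CONCEPT_TERMS, the toy corpus, and the filtering rule are all
# hypothetical; a real run would need far more than keyword matching.

import re

# Terms whose presence would let the model learn the held-out concept
# directly, or reach correct answers about it by analogy.
CONCEPT_TERMS = {"self", "feel", "feeling", "conscious", "emotion"}

def mentions_concept(text: str) -> bool:
    """True if any held-out-concept term appears as a whole word."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return bool(words & CONCEPT_TERMS)

corpus = [
    "The cat sat on the mat.",
    "I feel happy when it rains.",      # excluded: names the concept
    "Gradient descent minimizes a loss.",
    "She was conscious of the time.",   # excluded: analogy-adjacent use
]

# Training set: only examples with no trace of the held-out concept.
train_set = [ex for ex in corpus if not mentions_concept(ex)]

# Probe set: questions that require the concept itself. A coherent
# answer from a model trained only on `train_set` is the signal the
# test is looking for.
probe_set = ["What does it feel like to be you?"]

print(train_set)
print(probe_set)
```

The hard part in practice is the exclusion criterion itself: keyword filtering catches direct mentions, but ruling out every example that permits reasoning by analogy is a much stronger, and much harder to verify, condition.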