Labs have been doing that since Brooks' Subsumption Architecture decades ago. The problem with current AI is that its architectures, unlike the brain, lack grounded memory and hallucination mitigation. Letting those architectures walk around in the real world would expose the same flaws.
Multiple teams have already baked memory into their designs, some resembling typical ML and some biologically inspired. Hallucination mitigation needs far more research. My proposal was to study the parts of the brain whose damage causes hallucinations, on the hypothesis that they exist to mitigate them. Then, imitate them until we have something better.