Yes. I think LangChain fixes a lot of these problems by making it more like ordinary programming: instead of having the model “reason” about the results and the plan, you code the plan yourself and only call an LLM when it makes sense.
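A minimal sketch of what I mean, assuming langchain-openai is installed; the ticket-routing task, queue names, and prompt wording are all made up for illustration. The control flow is plain Python, and the LLM is only used for the one fuzzy step (classification):

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# The LLM handles only the step that needs language understanding.
prompt = ChatPromptTemplate.from_messages([
    ("system", "Classify the support ticket as 'billing', 'bug', or 'other'. "
               "Reply with exactly one word."),
    ("human", "{ticket}"),
])
classify = prompt | llm  # deterministic pipeline, no agent loop

def route_ticket(ticket: str) -> str:
    # The "plan" is ordinary code, so errors can't compound across steps.
    label = classify.invoke({"ticket": ticket}).content.strip().lower()
    if label == "billing":
        return "queue:billing"
    if label == "bug":
        return "queue:engineering"
    return "queue:triage"

print(route_ticket("I was charged twice for my subscription"))
```

The point is that the branching and error handling live in code you can test, rather than in a chain of model-generated decisions.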
Right now it’s extremely chaotic: with no human correction in the loop, the errors compound until you quickly reach incoherence.