All of the points made here are also true for Mastra, for example.
> One pain point has been documentation. The framework is developing very quickly and the docs are sometimes incomplete or out of date
I also found this to be the case when working with Microsoft's Semantic Kernel in the early days. Thankfully, they had a lot of examples and integration tests demonstrating usage.

Where's the AI startup using LLMs to automatically generate docs, sample code, and guides for libraries?
> I think it's fair to say that "roll-your-own" would probably make less sense
It is not particularly difficult to implement the use cases the article outlines on top of existing lower-level SDKs. Yes, I'm aware that some of these platforms offer a lot more than just a DAG flow of prompts, but the article's use case can be implemented in less than a day, TBH (from experience doing it twice now; not my choice, as other decision makers were hesitant to adopt existing libs and preferred lower-level ones).

I just think the article would be better if it actually answered "Why LangGraph and not these?"
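To make the "roll-your-own" claim concrete, here is a minimal sketch of a DAG flow of prompts built on the standard library alone. Everything here is hypothetical illustration: `call_llm` is a stub standing in for whatever lower-level SDK call you'd actually use (e.g. a chat-completions endpoint), and the node names and prompts are made up.

```python
from graphlib import TopologicalSorter


def call_llm(prompt: str) -> str:
    # Stub: in a real pipeline this would wrap a lower-level SDK call.
    return f"<response to: {prompt}>"


def run_dag(nodes: dict[str, tuple[str, list[str]]]) -> dict[str, str]:
    """Run prompt nodes in dependency order.

    `nodes` maps a node name to (prompt_template, dependencies).
    A template can reference a dependency's output via {dep_name}.
    """
    deps = {name: set(spec[1]) for name, spec in nodes.items()}
    results: dict[str, str] = {}
    # TopologicalSorter yields each node only after its predecessors.
    for name in TopologicalSorter(deps).static_order():
        template, _ = nodes[name]
        prompt = template.format(**{d: results[d] for d in deps[name]})
        results[name] = call_llm(prompt)
    return results


# Hypothetical two-step flow: extract facts, then summarize them.
flow = {
    "extract": ("List key facts from the document.", []),
    "summarize": ("Summarize these facts: {extract}", ["extract"]),
}
out = run_dag(flow)
```

Error handling, retries, and streaming are where frameworks earn their keep; the core orchestration, as shown, is small.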
But the pricing model and deployment story felt odd. The business model around LangGraph reminded us of Next.js/Vercel, with solid vendor lock-in and every cent squeezed out of the solution. The lack of clarity on that front made us go with Pydantic AI.
This is by far the most frustrating part of building with LLMs. Is there any good solution out there for any framework?