I’m building a non-trivial AI app, and the validation and dependency injection are such a great addition compared to using the LLM libraries directly.
Have you tried it?
This framework looks really well designed, I'm going to take it for a spin.
from the above link, which seems to use FSMs instead of DAGs.
I’m interested in a useful agentic framework but LangGraph doesn’t seem to cut it.
The use case where they are helpful is “bring your own keys” apps. I maintain https://github.com/kiln-ai/kiln which allows you to bring keys for 13 different providers. The abstraction is very much worth it for me.
That said:
- I migrated from LangChain to LiteLLM and never looked back
- I have over 1000 automated integration tests that check the grid of LLM features (tools, JSON), models, and providers. Without them it would still be a mess.
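A grid like that is usually generated rather than written by hand. Here's a minimal sketch of the idea, with placeholder feature/model/provider names (this is illustrative, not Kiln's actual test suite):

```python
# Hypothetical sketch of a feature x model x provider test grid.
# FEATURES, MODELS, and PROVIDERS are placeholders, not real identifiers.
import itertools

FEATURES = ["tools", "json_mode"]
MODELS = ["model-a", "model-b"]
PROVIDERS = ["provider-x", "provider-y"]

def run_case(feature, model, provider):
    # A real suite would call the LLM via LiteLLM here and assert on
    # the response shape; this stub just records the combination.
    return (feature, model, provider)

def run_grid():
    # One test case per point in the cross-product of the three axes.
    return [run_case(*combo)
            for combo in itertools.product(FEATURES, MODELS, PROVIDERS)]
```

With two entries per axis this yields 2 × 2 × 2 = 8 cases; the real grid grows multiplicatively as you add providers, which is how you end up past 1000 tests.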
So if you have an authoring workflow where a doc goes through a series of steps, and at some steps an analyst might want to fix some LLM output manually, try a couple of things, then roll back to the way it was before and try again, it will handle that, and you won't have to build your own state machine.
Not to mention that most of those frameworks were practically vibe-coded.
They offer very little on top of what you could do yourself.
But of course some people think "if it exists we need to use it"
You can have a subclass of your Node class be an AgentNode class, then subclass that for each type of agent; when you declare your Graph object, you pass in the data needed to instantiate each AgentNode. It's a bit odd that LangGraph doesn't ship a default Node class, but it sort of makes sense: they want you to write one that fits how you actually use it.
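The subclassing arrangement above can be sketched roughly like this; the class names (Node, AgentNode, SummarizerAgent) and fields are illustrative, not LangGraph API:

```python
# Hypothetical Node -> AgentNode -> concrete-agent hierarchy.
from dataclasses import dataclass

@dataclass
class Node:
    name: str

    def run(self, state: dict) -> dict:
        # Each concrete node type decides what "run" means for it.
        raise NotImplementedError

@dataclass
class AgentNode(Node):
    # Per-agent config, supplied when the graph is declared.
    system_prompt: str = ""

class SummarizerAgent(AgentNode):
    def run(self, state: dict) -> dict:
        # A real agent would call an LLM here; this stub just marks the state.
        state[self.name] = f"ran with prompt: {self.system_prompt}"
        return state
```

Declaring the graph then just means instantiating each node with its data, e.g. `SummarizerAgent(name="summarize", system_prompt="Summarize the doc")`.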
I do highly recommend abstracting your graph into Node and Edge classes (with appropriate subclasses) and declaring your graph in a constant that you can pass to a build_graph method. Getting as much code reuse as possible dramatically simplifies debugging graph issues.
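The graph-as-a-constant idea might look like this minimal sketch; Edge, GRAPH_SPEC, and build_graph are assumed names, not from any framework:

```python
# Hypothetical "declare the graph as data, then build it" pattern.
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    src: str
    dst: str

# The whole graph lives in one constant, so it's easy to read and diff.
GRAPH_SPEC = {
    "nodes": ["fetch", "summarize", "review"],
    "edges": [Edge("fetch", "summarize"), Edge("summarize", "review")],
}

def build_graph(spec: dict) -> dict:
    # Build a plain adjacency list from the spec; a real builder would
    # instantiate framework node objects and wire them up instead.
    adjacency = {name: [] for name in spec["nodes"]}
    for edge in spec["edges"]:
        adjacency[edge.src].append(edge.dst)
    return adjacency
```

Because the topology is plain data, you can unit-test it (no orphan nodes, no dangling edges) without running any LLM calls, which is where most of the debugging win comes from.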