In case you don’t recall, Burr is an open-source Python library that makes it easier to build and debug GenAI applications and agents by representing them as graphs of simple Python objects/functions. Burr abstracts away only system-level concerns (state persistence, debugging, observability) and does not dictate how you interact with LLMs. It ships with a host of capabilities, including an open-source UI for monitoring and observability. Burr competes with (and complements) libraries such as Haystack and LangGraph, differentiating itself with a focus on simpler graph, state, and observability constructs. We value clarity and customization over terseness (we do not have a graduation problem).
You can find the repository here: https://github.com/dagworks-inc/burr.
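To make the "graph of simple Python functions" idea concrete, here is a minimal, self-contained sketch of that representation in plain Python. Note this is not Burr's actual API (see the docs in the repository for that); the function names, the `ACTIONS`/`TRANSITIONS` dicts, and the `run` loop are all hypothetical, illustrating only the shape of the abstraction: each node is a function over explicit state, and edges decide what runs next.

```python
# Hypothetical illustration of an agent as a graph of plain Python
# functions over explicit state. Burr's real API differs; this only
# sketches the representation described above.

def generate_reply(state: dict) -> dict:
    # Stand-in for an LLM call -- Burr does not dictate how you call LLMs.
    return {**state, "reply": f"echo: {state['prompt']}"}

def record_history(state: dict) -> dict:
    # Pure function: returns new state rather than mutating in place,
    # which is what makes persistence and time travel tractable.
    return {**state, "history": state.get("history", []) + [state["reply"]]}

# The graph: nodes are functions, edges say which node runs next.
ACTIONS = {"generate": generate_reply, "record": record_history}
TRANSITIONS = {"generate": "record", "record": None}

def run(entrypoint: str, state: dict) -> dict:
    node = entrypoint
    while node is not None:
        state = ACTIONS[node](state)
        node = TRANSITIONS[node]
    return state

final = run("generate", {"prompt": "hi"})
```

Because every step is an explicit state transition, a framework built this way can checkpoint state at each node, which is what enables features like the reloading and forking described below.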
We are really excited about the following new features:
- Recursive, Parallel Agents: Model multi-agent hierarchies and track directly in the UI
- UI Annotations: Mark production data to review and gather post-execution evaluation/test datasets
- OpenTelemetry Integration: Log to OpenTelemetry and ingest OTel traces into the Burr UI for improved, customizable visibility
- Reloading, Time Travel, and Forking: Debug by reloading any point in the execution history to replay and fix issues
- Production-Ready Monitoring: Deploy with a simple self-hosted S3-based system
Since release, people have been building and successfully shipping concierge agents for Slack, voice answering agents for restaurants, agents over RAG systems, and co-pilots for internal business workflows, to name a few. On top of this we have an exciting set of blog posts, write-ups, and user testimonials -- we'll share these, plus more links to get started, in a comment below!