We built it for apps that use LLMs and other ML models. The lightweight Python agent auto-instruments OpenAI, LangChain, Banana, and other APIs and frameworks. By adding a single line of code you can monitor and analyze latency, errors, compute, and costs. Profiling via cProfile, PyTorch Kineto, or Yappi can be enabled when code-level statistics are needed.
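To give a rough idea of what "one line of code" means in practice, here is a minimal setup sketch; the `configure()` call, its parameters, and the key value are assumptions for illustration, so please check the actual docs:

```python
# Hypothetical setup sketch -- parameter names are assumptions, not the exact API.
import graphsignal

# One-time configuration; the api_key value is a placeholder.
graphsignal.configure(api_key='my-api-key', deployment='my-llm-app')

# After this, calls made through supported libraries (e.g. OpenAI, LangChain)
# would be instrumented automatically, with no further code changes.
```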
Here is a short demo screencast for a LangChain/OpenAI app: https://www.loom.com/share/17ba8aff32b74d74b7ba7f5357ed9250
In terms of data privacy, we only send metadata and statistics to https://graphsignal.com, so no raw data, such as prompts or images, leaves your app.
We'd love to hear your feedback or ideas!