Also, in light of this weekend's madness, are you reconsidering using the OpenAI API as the canonical interface? Do you think there could be changes in the future?
a) This is hosted
b) Supports caching and load balancing
c) Can manage multiple providers behind a single API key
d) Implemented in TypeScript (vs. Python)
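To make (c) concrete, here's a minimal sketch of the idea of routing requests for different models to different providers behind one client-facing key. The provider names, model names, and URLs below are illustrative assumptions, not the actual routing table:

```typescript
// Hypothetical sketch: map an OpenAI-style model name to a backing provider.
// A proxy can use a table like this so clients only ever hold one API key.

type Provider = { name: string; baseURL: string };

const providers: Record<string, Provider> = {
  // Illustrative entries only.
  "gpt-4": { name: "openai", baseURL: "https://api.openai.com/v1" },
  "claude-2": { name: "anthropic", baseURL: "https://api.anthropic.com/v1" },
};

function routeModel(model: string): Provider {
  const provider = providers[model];
  if (!provider) {
    throw new Error(`No provider configured for model: ${model}`);
  }
  return provider;
}

console.log(routeModel("claude-2").name); // "anthropic"
```

In practice the client would keep using the standard OpenAI SDK and just point its base URL at the proxy, which applies routing (plus caching and load balancing) server-side.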
On the other hand, LiteLLM is a more mature project and supports significantly more model providers than we currently do!
Feel free to ping me at ankur@braintrustdata.com and we can chat more!
We will likely work with or build some routing capabilities in the future!