We’ve built an LLM microservice that answers questions about a corpus of documents and automatically reacts when new documents are added. This single, self-contained service replaces a complex multi-system pipeline that scans for new documents in real time, indexes them into a specialized vector database, and queries it to generate answers. Everyone can have their own real-time vector index now.
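To illustrate the core pattern (this is a toy sketch, not the llm-app API — `LiveIndex` and the bag-of-words `embed` are illustrative stand-ins for a real embedding model and vector store): an index that updates the moment a document arrives, and that you query to retrieve context for answering.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real service would use a neural model.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(v * b.get(k, 0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class LiveIndex:
    """Minimal in-memory index that re-indexes as documents arrive."""
    def __init__(self):
        self.docs = []

    def add(self, text):
        # Adding a doc updates the index immediately -- no batch rebuild step.
        self.docs.append((text, embed(text)))

    def query(self, question, k=1):
        # Retrieve the k most similar documents to serve as LLM context.
        q = embed(question)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

index = LiveIndex()
index.add("Pathway processes data streams incrementally.")
index.add("The demo video shows the service in action.")
print(index.query("how does pathway handle data streams")[0])
# -> "Pathway processes data streams incrementally."
```

In the real service, retrieval results are passed to an LLM as context for answer generation; the point of the sketch is only that indexing and querying live in one process, reacting to new documents as they appear.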
GitHub: https://github.com/pathwaycom/llm-app
Demo video: https://youtu.be/kcrJSk00duw
I am eager to hear your thoughts and comments!