I wonder if one even needs LlamaIndex?
From their site:
>Storing context in an easy-to-access format for prompt insertion.
>Dealing with prompt limitations (e.g. 4096 tokens for Davinci) when context is too big.
>Dealing with text splitting.
Not sure it wouldn't be easier to roll one's own for that...?
I know a thing or two about the math behind LLMs, and all this software built around a few core ideas just seems like a lot of overkill...
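To make "rolling one's own" concrete, here's a minimal sketch of the three quoted tasks (storing context, splitting text, staying under a token limit). It's a hand-rolled illustration only, and it approximates token counts by whitespace-separated words; a real version would count tokens with the model's actual tokenizer.

```python
# Hand-rolled sketch: split stored context into overlapping chunks,
# then pack as many chunks as fit under a token budget into a prompt.
# Word counts stand in for real token counts (an assumption, not exact).

def split_text(text, max_tokens=200, overlap=20):
    """Split text into chunks of at most max_tokens words, overlapping by `overlap`."""
    words = text.split()
    chunks = []
    step = max_tokens - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break
    return chunks

def build_prompt(question, chunks, budget=4096):
    """Insert as many context chunks as fit under the model's token limit."""
    used = len(question.split())
    selected = []
    for chunk in chunks:
        n = len(chunk.split())
        if used + n > budget:
            break
        selected.append(chunk)
        used += n
    return "\n\n".join(selected + [question])
```

For a toy corpus this is genuinely only a few dozen lines; what the libraries add on top is mostly retrieval (embedding the chunks and selecting the most relevant ones rather than the first ones that fit).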
When you mentioned PGVector, did you mean this repo, or is there a class within LangChain that has the same name? https://github.com/pgvector/pgvector