For example, mapping embeddings of Llama to GPT-3?
That way you can see how similarly the models “understand the world”.
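One simple way to do this (a sketch, not a definitive recipe): embed the same set of anchor texts with both models, then fit a least-squares linear map from one embedding space to the other and see how well it transfers. The function name and the synthetic data below are illustrative only.

```python
import numpy as np

def fit_linear_map(src, tgt):
    """Least-squares W such that src @ W ≈ tgt.
    src: (n, d_src) embeddings of anchor texts from model A (e.g. Llama),
    tgt: (n, d_tgt) embeddings of the SAME texts from model B (e.g. GPT-3)."""
    W, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return W

# Toy demo with synthetic "embeddings": tgt is an exact linear image of src,
# so the fitted map should recover it almost perfectly.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 8))
true_W = rng.normal(size=(8, 4))
tgt = src @ true_W

W = fit_linear_map(src, tgt)
err = np.abs(src @ W - tgt).max()  # near zero on this synthetic data
```

With real models the fit won't be exact; how well nearest neighbors are preserved after mapping is one rough signal of how much structure the two models share.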
>download all my tweets (about 20k) and build a semantic searcher on top ?
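The searcher part of that is small: embed every tweet once, embed the query, rank by cosine similarity. Here's a minimal pure-Python sketch of the ranking step; the embedding function itself is assumed (any sentence-embedding model would do), and the 2-d vectors below are toy stand-ins for real embeddings.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, tweet_vecs, tweets, k=3):
    """Return the k tweets whose embeddings are closest to the query embedding."""
    scored = sorted(
        zip(tweets, tweet_vecs),
        key=lambda pair: cosine(query_vec, pair[1]),
        reverse=True,
    )
    return [tweet for tweet, _ in scored[:k]]

# Toy 2-d "embeddings" standing in for real model output.
tweets = ["ship it", "cats are great", "new model release"]
vecs = [[1.0, 0.1], [0.0, 1.0], [0.9, 0.2]]
top = search([1.0, 0.0], vecs, tweets, k=2)
# top == ["ship it", "new model release"]
```

At 20k tweets a brute-force loop like this is still fast; a vector index only becomes worth it at much larger scale.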
How can I utilize 3rd-party embeddings with OpenAI's LLM API? Am I correct in understanding from this article that this is possible?
My apologies if I am completely mangling the vocabulary here - I have, at best, a rudimentary understanding of this stuff and am trying to hack together an education on it.
Edit: If you're at the SF meetup tomorrow, I'd happily buy you a beverage in return for this explanation :)
https://github.com/mayooear/gpt4-pdf-chatbot-langchain for example
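The reason this works: in that kind of pipeline, embeddings are only used for *retrieval*. The LLM never sees the vectors - it gets plain text - so the embedding model and the completion model don't need to match. A rough sketch of the pattern (function name and prompt wording are my own, not from that repo):

```python
def build_prompt(question, retrieved_chunks):
    """Assemble a plain-text prompt from chunks retrieved with ANY embedding
    model (open-source or otherwise); the LLM only ever sees this text."""
    context = "\n---\n".join(retrieved_chunks)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Chunks would come from a vector search over your own embeddings.
chunks = [
    "Embeddings are used only for retrieval.",
    "The LLM receives plain text, not vectors.",
]
prompt = build_prompt("Can I mix embedding providers?", chunks)
# `prompt` is then sent to whatever chat/completions API you like.
```

The one constraint is consistency on the retrieval side: the query and the stored documents must be embedded with the *same* model, since vectors from different models aren't comparable.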
No code needed :)
I'll have to dig out the notebook that I created for this, but I'll try to post it here once I find it.
The reasons the article listed, namely a) lock-in and b) cost, have given me pause about embedding our whole corpus of data. I'd much rather use an open model, but I don't have much experience evaluating these embedding models and their search performance - it's still very new to me.
Like what you did with ada-002 vs Instructor XL - have there been any papers or prior work evaluating the different embedding models?
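If you have even a handful of labeled (query, relevant document) pairs from your own corpus, you can compare models yourself with a simple recall@k loop - often more telling than published benchmarks, since it's your data. A minimal sketch (the `embed` callable is whatever model you're testing; the toy lookup-table "embeddings" below just demonstrate the mechanics):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recall_at_k(embed, queries, docs, relevant, k=1):
    """embed: callable text -> vector, i.e. the model under test.
    relevant: dict mapping each query to the index of its correct doc.
    Returns the fraction of queries whose correct doc lands in the top k."""
    doc_vecs = [embed(d) for d in docs]
    hits = 0
    for q in queries:
        qv = embed(q)
        ranked = sorted(range(len(docs)),
                        key=lambda i: cosine(qv, doc_vecs[i]),
                        reverse=True)
        hits += relevant[q] in ranked[:k]
    return hits / len(queries)

# Toy stand-in for a real embedding model: a fixed lookup table.
toy_vecs = {
    "q1": [1.0, 0.0], "q2": [0.0, 1.0],
    "d1": [1.0, 0.1], "d2": [0.1, 1.0],
}
score = recall_at_k(toy_vecs.__getitem__,
                    ["q1", "q2"], ["d1", "d2"],
                    {"q1": 0, "q2": 1}, k=1)
# score == 1.0 on this toy data
```

Run the same loop once per candidate model and compare the scores directly.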
Generally MiniLM is a good baseline. For faster models you want this library:
https://github.com/oborchers/Fast_Sentence_Embeddings
For higher quality ones, just take the bigger/slower models in the SentenceTransformers library
(Although there are a lot more advantages to just having Office 2021, like the flat fee)