In a nutshell, part of your LLM prompt (usually your most recent question) gets fed as a query to the embedding/vector database. It retrieves the entries most "similar" to your question (which is what an embedding database does), and that information is pasted into the LLM's context. It's kind of like pasting the first result from a local Google search into the beginning of your question as "background."
Some implementations also insert your old conversation turns into the database as they get pushed out of the LLM's context window (i.e. once the conversation grows too big to fit).
This is what I have seen, anyway. Maybe some other implementations do things better.
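The flow described above can be sketched in a few lines of Python. The word-count "embedding" here is a toy stand-in for a real embedding model (e.g. BERT or a sentence-transformers model), and the document list is made up for illustration; the retrieval-then-paste mechanics are the point.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": a word-count vector. Real systems use a learned
    # model (BERT, sentence-transformers, etc.) to produce dense vectors.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "vector database": each entry stored alongside its vector.
# Note the last entry, an old conversation turn pushed out of context.
documents = [
    "The capital of France is Paris.",
    "Python dictionaries are hash tables.",
    "Old conversation turn: we discussed gardening yesterday.",
]
index = [(doc, embed(doc)) for doc in documents]

# Retrieval: embed the question, find the most similar entry,
# and paste it into the prompt as background.
question = "What is the capital of France?"
q_vec = embed(question)
background = max(index, key=lambda entry: cosine(q_vec, entry[1]))[0]
prompt = f"Background: {background}\n\nQuestion: {question}"
print(prompt)
```

A production setup swaps the toy `embed` for a model and the linear `max` scan for an approximate-nearest-neighbor index, but the shape of the loop is the same.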
How is it embedded? Using a separate embedding model, like BERT or something? Or do you use the LLM itself somehow? Also, how do you create the content for the vector database keys themselves? Just some arbitrary off-the-shelf embedding? Or do you train it as part of training the LLM?
You do not need a fancy cloud-hosted service to use an embeddings database, just as you do not need one to use a regular database (although you could).
Check https://github.com/kagisearch/vectordb for a simple implementation of a vector search database that uses local, on-premise open-source tools and lets you use an embeddings database in 3 lines of code.