> For sure it will depend on use case, if you have fairly structured data or a clear domain-specific terminology to rely on
Indeed. This works only in a subset of business domains for me: search and assistants over enterprise knowledge bases (e.g. ~40k documents, ~20GB of text) in logistics, supply chain, legal, fintech and medtech.
> You might be able to quantify this and gain some insight into why query expansion/FTS is working better by comparing the precision/recall with a vector db using some set of benchmark docs and queries.
Embeddings tend to miss a lot of nuance, and they are unpredictable when searching over large text sets (e.g. 40k documents split into fragments), frequently ranking irrelevant passages above the relevant ones. The resulting context contamination leads to hallucinations in our case.
With LLM-driven query expansion and FTS, however, I get controllable retrieval quality on business tasks. And when an edge case shows up, it is fairly easy to explain and adjust the query-expansion logic to cover the specific nuance.
This is the setup I'm happy with.
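A minimal, self-contained sketch of that kind of setup. SQLite FTS5 stands in for whatever FTS engine is actually used, and a hardcoded synonym map stands in for the LLM expansion step; the point is that the expansion output is an inspectable FTS query you can debug and adjust per edge case:

```python
import sqlite3

# Stand-in for the LLM: a domain synonym map. In production this step
# would be an LLM prompt returning expansions; keeping its output as a
# plain FTS query string is what makes retrieval explainable/adjustable.
SYNONYMS = {
    "invoice": ["invoice", "bill", "receipt"],
    "shipment": ["shipment", "delivery", "consignment"],
}

def expand_query(query: str) -> str:
    # Each user term becomes an OR-group of synonyms; groups are ANDed.
    groups = []
    for word in query.lower().split():
        groups.append("(" + " OR ".join(SYNONYMS.get(word, [word])) + ")")
    return " AND ".join(groups)

def search(conn: sqlite3.Connection, query: str) -> list[str]:
    fts_query = expand_query(query)  # e.g. "(invoice OR bill OR receipt) AND ..."
    rows = conn.execute(
        "SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank",
        (fts_query,),
    )
    return [r[0] for r in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
conn.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [
        ("doc-1", "The bill for the delayed delivery was disputed."),
        ("doc-2", "Quarterly report on warehouse staffing."),
    ],
)
print(search(conn, "invoice shipment"))  # doc-1 matches via synonyms only
```

When retrieval misses, you can log the expanded query string and see exactly which OR-group failed to match, instead of debugging an opaque embedding distance.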