You can criticize LLMs all you want, but the fact is that they provide value to people in ways that alternatives simply don’t. The energy consumption is a concern, but don’t pretend there are viable alternatives when there aren’t.
LLMs store embeddings of individual tokens (usually parts of words), so the result of an actual search would be the top-k embeddings and their corresponding tokens, similar to the output of a Google search. You could extract the input embedding matrix from some open-weights model and find the tokens closest to your query, though it's not clear why you'd want to. OP got coherent text, so that's not search.
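A minimal sketch of what that kind of "search" would look like, assuming GPT-2 via Hugging Face transformers (the model choice and the cosine-similarity metric are my own picks, not anything OP used):

```python
# Sketch: nearest-token lookup over an LLM's input embedding matrix.
# Assumes GPT-2 from Hugging Face transformers; any open-weights model works.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

# (vocab_size, hidden_dim) input embedding matrix, detached since we only read it.
emb = model.wte.weight.detach()

def nearest_tokens(query: str, k: int = 5):
    # Embed the query; average if the tokenizer splits it into several pieces.
    ids = tokenizer(query, return_tensors="pt")["input_ids"][0]
    q = emb[ids].mean(dim=0)
    # Cosine similarity of the query against every row of the embedding matrix.
    sims = torch.nn.functional.cosine_similarity(q.unsqueeze(0), emb)
    top = sims.topk(k + 1)  # +1 because the query token itself ranks first
    return [(tokenizer.decode([int(i)]), round(s.item(), 3))
            for i, s in zip(top.indices, top.values)]

print(nearest_tokens("dog"))
# The output is a ranked list of raw token neighbors -- top-k search results,
# not coherent text.
```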
It's _similar_, though, because attention in LLMs essentially looks for the most similar tokens. So to answer the question about the term, the LLM had to generate a stream of tokens semantically closest to the given description. That is somewhat like a search, but it's not exactly the same.
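For intuition, here's scaled dot-product attention in a few lines of numpy (a textbook sketch with toy random inputs, not any particular model's code): the query/key dot product is literally a similarity score, and softmax turns those scores into weights concentrated on the most similar tokens.

```python
import numpy as np

def attention(Q, K, V):
    # Similarity of each query to each key: a dot product, i.e. a soft "search".
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns similarity scores into weights favoring the closest tokens.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output is a weighted blend of the values of the most similar tokens.
    return weights @ V

# Toy example: 3 tokens with 4-dim embeddings (random stand-ins).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(attention(x, x, x).shape)  # (3, 4)
```

So each attention step retrieves from the context by similarity, but the model then blends and transforms what it retrieves into new tokens, which is why the output reads as coherent text rather than a result list.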