I think this is the real problem: if it's unsourced, how can I verify the LLM isn't hallucinating? That said, I've started running Open WebUI to host models locally, and I've heard some models will cite their sources (I don't know which; I haven't hosted them yet), so that's promising. I also like hosting DeepSeek locally and being able to review its reasoning process, so I can assess how it arrived at its conclusions.

All that to say, I still use a traditional search engine (a self-hosted instance of SearXNG) for 95% of my searches. I like LLMs for bouncing ideas around, but not for finding accurate results quickly.
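For what it's worth, the local DeepSeek-R1 builds emit that reasoning inline between `<think>...</think>` tags before the final answer, so it's easy to split out for review. A minimal sketch (the tag convention is what R1 uses; the helper name and sample text are just mine):

```python
import re

def split_reasoning(response: str) -> tuple[str, str]:
    """Separate the <think>...</think> reasoning trace from the final answer."""
    match = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    if not match:
        # No trace present (e.g. a non-reasoning model): everything is the answer.
        return "", response.strip()
    reasoning = match.group(1).strip()
    answer = response[match.end():].strip()
    return reasoning, answer

sample = "<think>User asked for 2+2. Basic arithmetic: 4.</think>The answer is 4."
reasoning, answer = split_reasoning(sample)
print(reasoning)  # User asked for 2+2. Basic arithmetic: 4.
print(answer)     # The answer is 4.
```

Reading the trace alongside the answer is exactly how I sanity-check whether the conclusion actually follows from the steps.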