Normally, I'm working in a small part of the codebase and can give Cursor really concrete instructions along with the right context. In those cases, it works really well.
But when I ask it more generic questions (where's the code that does X? / how can I do X? / implement <functionality> using X and Y), it often hallucinates or gives me wrong answers.
I can see that it tries to find the right context to send to the LLM. Sometimes it finds it, other times it doesn't. Even when it does, I'm guessing too much context gets sent to the LLM, so it ends up hallucinating anyway.
Have you had the same problem with whichever AI coding tool you use? I'm wondering if the problem is specific to the large legacy codebase I'm working with, or if it's a more general issue with code the LLM hasn't seen in its training data.