One of the things I'm using LLMs for a lot right now is quickly generating answers about larger codebases I'm completely unfamiliar with.
Anything up to 250,000 tokens I pipe into GPT-5 (previously o3); beyond that I'll send it to Gemini 2.5 Pro.
For codebases even larger than that I'll fire up Codex CLI or Claude Code and let them grep their way to an answer.
This stuff has gotten good enough now that I no longer get stuck when new tools lack decent documentation - I'll pipe in just the source code (filtered for .go or .rs or .c files or whatever) and generate comprehensive documentation for myself from scratch.
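The routing logic above can be sketched in a few lines of Python. This is a minimal illustration, not a real tool: the model names and 250,000-token cutoff come from the text, but the extension filter, the rough 4-characters-per-token estimate, and all function names are my own assumptions.

```python
import os

TOKEN_LIMIT = 250_000  # rough cutoff described above (assumption: hard threshold)


def gather_source(root, extensions=(".go", ".rs", ".c")):
    """Concatenate all source files under root matching the given extensions."""
    chunks = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            if name.endswith(extensions):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="replace") as f:
                    # Prefix each file with its path so the model can cite locations
                    chunks.append(f"// File: {path}\n{f.read()}")
    return "\n\n".join(chunks)


def pick_model(prompt):
    """Pick a model by estimated size (~4 chars per token, a crude heuristic)."""
    estimated_tokens = len(prompt) // 4
    return "gpt-5" if estimated_tokens <= TOKEN_LIMIT else "gemini-2.5-pro"
```

In practice you would pipe `gather_source(...)` output into whatever LLM CLI you use, with a question prepended; the sketch only covers the filter-and-route step.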