I'm hard-pressed to construct an argument in which search, with widely accessible LLM/LAM technology, still looks like:
1. User types in query
2. Search returns hits
3. User selects a hit
4. User looks for information in hit
5. User has information
Summarization and deep indexing are too powerful and remove the necessity of steps 2-4. For example, with the API case, why doesn't your future IDE surface the API directly (from its documentation)? Or your future search engine summarize exactly the part of the API spec you need?