Ask HN: What's the practical use of larger context LLMs?
I see that lots of folks are working on building LLMs that can handle more context without breaking the bank on GPUs.
Is there a real practical reason for that right now, or is it just something everybody agrees is obviously useful without any clear economic justification?