It shouldn't take engineering time to have a database build full-text indices. In a sensible system you run "CREATE INDEX" and you're done.
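As a concrete sketch of "one statement and done": SQLite's FTS5 extension gives you a full-text index with a single CREATE statement (assuming your SQLite build includes FTS5, which stock Python builds normally do — the table and log lines here are made up for illustration).

```python
import sqlite3

con = sqlite3.connect(":memory:")
# One statement: a virtual table whose contents are full-text indexed.
con.execute("CREATE VIRTUAL TABLE logs USING fts5(line)")
con.executemany(
    "INSERT INTO logs(line) VALUES (?)",
    [("connection timeout on host-7",),
     ("request served in 12ms",),
     ("disk full on host-3",)],
)
# The index makes this a lookup, not a linear scan over every row.
hits = [row[0] for row in
        con.execute("SELECT line FROM logs WHERE logs MATCH 'timeout'")]
print(hits)  # → ['connection timeout on host-7']
```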
To search multiple TBs of logs, a single $40/month server with an 8 TB SSD is enough, provided it runs sensible software with a sensible index structure.
I agree that Elasticsearch is bloated and demands undue engineering time. But it doesn't need to be that way.
For example, Quickwit returns results in under a second.
It's a huge improvement when queries go from a 10-minute linear scan to effectively instant.
(Its index still isn't perfect for me because it doesn't fully support simple exact prefix/infix search, but otherwise it does the job fast and with few resources.)
> Full fuzzy text search is probably overkill
Yes, I think most people don't need fuzzy search for log search. They just need indexed grep.
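"Indexed grep" is nothing exotic. A hypothetical minimal version is just an inverted index from tokens to line numbers, built once, after which each exact-token query is a dictionary lookup instead of a scan (all names and the example log lines here are made up):

```python
import re
from collections import defaultdict

def build_index(lines):
    # One pass over the logs: map each lowercased token to the set of
    # line numbers it appears on.
    index = defaultdict(set)
    for lineno, line in enumerate(lines):
        for token in re.findall(r"\w+", line.lower()):
            index[token].add(lineno)
    return index

def indexed_grep(index, lines, token):
    # O(number of matches) per query after the one-time build,
    # versus O(total log size) for plain grep on every query.
    return [lines[i] for i in sorted(index.get(token.lower(), ()))]

logs = [
    "INFO request ok",
    "ERROR disk full",
    "INFO request ok",
    "ERROR timeout talking to db",
]
idx = build_index(logs)
print(indexed_grep(idx, logs, "ERROR"))
# → ['ERROR disk full', 'ERROR timeout talking to db']
```

Real systems (Lucene, Quickwit's tantivy) are elaborations of exactly this structure, plus compression and on-disk layout.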
> I think grep is amazing, but yes, if you unleash it on 'all the logs' without first narrowing down to a time frame or some other taxonomy, it is going to be slow. This seems like a skill issue, frankly.
Right, grep is not the tool for the job: it ignores all the sensible algorithms that solve this problem. It's like saying "I don't use binary search, only linear search" and then spending human effort to pre-select the range so that the scan is fast enough.
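To make the analogy concrete: logs are already sorted by timestamp, so selecting a time range is two binary-search probes, no human pre-selection needed (a sketch with made-up epoch timestamps):

```python
import bisect

# Hypothetical sorted timestamps, one every 10 seconds.
timestamps = list(range(0, 1_000_000, 10))

def slice_by_time(ts, start, end):
    # Two O(log n) probes find the range; grep's approach would
    # scan all entries linearly instead.
    lo = bisect.bisect_left(ts, start)
    hi = bisect.bisect_right(ts, end)
    return ts[lo:hi]

print(slice_by_time(timestamps, 100, 130))  # → [100, 110, 120, 130]
```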
And when you're searching for rare bugs, you can't just limit the time frame anyway.