I did really like Kibana; ultimately, we had to ditch it (because we ditched ES). That was probably for the best, as more than once I degraded ingest on the ES cluster just by using Kibana to do some aggressive filtering. Clickhouse handles those queries without a problem.
I think a more complete setup may be to pipe logs into Kafka, then ingest them into Clickhouse/Druid for different types of analysis/rollups.
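On the Clickhouse side, that Kafka pipeline can be sketched with the Kafka table engine plus a materialized view. This is only a minimal illustration; the broker address, topic, table names, and the three-column log schema are all hypothetical:

```sql
-- Kafka engine table: a consumer endpoint, not durable storage
-- (broker, topic, and consumer-group names are hypothetical)
CREATE TABLE logs_queue
(
    ts      DateTime,
    level   String,
    message String
)
ENGINE = Kafka
SETTINGS kafka_broker_list = 'kafka:9092',
         kafka_topic_list = 'logs',
         kafka_group_name = 'clickhouse-logs',
         kafka_format = 'JSONEachRow';

-- Durable MergeTree table the log lines actually land in
CREATE TABLE logs
(
    ts      DateTime,
    level   String,
    message String
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(ts)
ORDER BY (level, ts);

-- Materialized view streams rows from the Kafka consumer into storage
CREATE MATERIALIZED VIEW logs_mv TO logs
AS SELECT ts, level, message FROM logs_queue;
```

The materialized view is what makes ingestion continuous: Clickhouse reads from the Kafka table in the background and inserts each consumed batch into the MergeTree table.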
Our current logging volume exceeds ~10b log lines per day. Clickhouse handles this ingest almost too well (we have three 16-core nodes that sit at 5% CPU). This is down from a 20-ish node ES cluster that basically sat pegged on CPU, and our log volume then was only ~1b/day.
For more ad-hoc analysis, we just use the clickhouse-cli to query the dataset directly. We're also tangentially investigating using Superset with it.
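As an example, the kind of ad-hoc filtering aggregation that used to hurt our ES cluster is a cheap scan in Clickhouse. A sketch (the `logs` table and its columns are hypothetical):

```sql
-- Error lines per hour over the last day, filtered by substring match
SELECT toStartOfHour(ts) AS hour,
       count() AS errors
FROM logs
WHERE level = 'ERROR'
  AND ts > now() - INTERVAL 1 DAY
  AND message LIKE '%timeout%'
GROUP BY hour
ORDER BY hour;
```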