> OK, but then how do you perform ad hoc queries on everything you logged to Kafka when it's time to debug an issue?
Again, I'd say treat it like data you care about. Use your best guess at a primary identifier as the record key. Depending on your data volume, do some indexing/pre-aggregation around the other facets you think you'll want to query on (which might mean materialising everything in ksqlDB, or even in some other datastore), and accept that occasionally you're going to have to do a slow full scan.
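To make that concrete, here's a rough sketch of the ksqlDB flavour of this (topic name, field names, and the facet being aggregated are all made up for the example):

```sql
-- Source stream over the raw log topic, keyed by our best-guess
-- primary identifier (here a hypothetical request_id)
CREATE STREAM app_logs (
  request_id VARCHAR KEY,
  service    VARCHAR,
  level      VARCHAR,
  message    VARCHAR
) WITH (KAFKA_TOPIC = 'app_logs', VALUE_FORMAT = 'JSON');

-- Pre-aggregate a facet you expect to query at debug time:
-- error counts per service, materialised as a table
CREATE TABLE errors_by_service AS
  SELECT service, COUNT(*) AS error_count
  FROM app_logs
  WHERE level = 'ERROR'
  GROUP BY service;
```

Point lookups against the materialised table are then cheap pull queries (`SELECT * FROM errors_by_service WHERE service = 'checkout';`); anything you didn't pre-aggregate falls back to the slow scan over the stream.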
> There are plenty of well known, battle tested solutions for solving that problem with old school logging.
Splunk was just bought for $28B precisely because none of those "well known, battle tested solutions" are actually any good. (Splunk also sucks! It just sucks a little less than the other options.)