at enterprise scale, on the backend you end up paying for a bunch of indexing you will likely never use. On top of that you spend a LOT of money in engineering hours setting up indexes for many teams, all with different log formats, just so the whole thing doesn't melt down.
On the Kibana side, its query language isn't shared by any other tool, at least none that I use, so in the middle of an outage I end up chasing my tail reading docs on how to query what I want. Results often come back slowly, and it's very hard to export the logs you do find to plain text files so you can feed them into other tools.
I mean, I came up on cat/grep/awk/sed/less/tail (and more recently jq for JSON logs). It wasn't perfect, but it was RESPONSIVE and portable.
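For anyone who hasn't lived it, the old-school pipeline looks something like this. The log file and its contents here are made up just to illustrate the pattern: a mixed-format log (plain lines and JSON lines side by side), carved up with grep/awk/jq and nothing else:

```shell
#!/bin/sh
# Hypothetical mixed-format log -- plain-text and JSON lines interleaved,
# which is exactly the mess ES indexing schemes choke on.
cat > /tmp/app.log <<'EOF'
2024-01-05T10:00:01Z INFO  starting worker
{"ts":"2024-01-05T10:00:02Z","level":"ERROR","msg":"db timeout","ms":5123}
2024-01-05T10:00:03Z WARN  retrying
{"ts":"2024-01-05T10:00:04Z","level":"ERROR","msg":"db timeout","ms":4890}
EOF

# Count error lines, any format:
grep -c ERROR /tmp/app.log

# Pull the timestamp field off plain-text WARN lines:
awk '/WARN/ {print $1}' /tmp/app.log

# For the JSON lines, jq with -R plus fromjson? silently skips
# anything that isn't valid JSON, then filters like any other query:
jq -R 'fromjson? | select(.level == "ERROR") | .ms' /tmp/app.log
```

Everything is instant on anything short of huge files, the output is plain text you can pipe anywhere, and the same commands work on every box you ssh into.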
I just think that tools like ES/Splunk weren't conceived for dealing with logs (especially logs that come in many formats) and are overkill and underkill for the task at the same time. It's like using a ball-peen hammer to drive nails: you can certainly DO it, but a claw hammer is cheaper and a more ergonomic experience.