Hard disagree. Kafka is one of the simplest, lowest-maintenance tools for this, with excellent language support, and would probably be the first choice for anyone not paying $cloud_vendor for a managed durable queue.
The first step in building a reliable logging system is setting up high-write-throughput, highly available, FIFO-ish durable storage. Once you have that, everything else gets a lot easier.
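For concreteness, here's a minimal sketch of that setup using the Java AdminClient. The topic name ("app-logs"), partition count, and broker address are placeholders I made up; the part that matters is the replication config that makes the storage durable and highly available:

```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class CreateLogTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker

        try (Admin admin = Admin.create(props)) {
            // 12 partitions for write parallelism; ordering holds only within
            // a partition, hence "FIFO-ish" rather than strictly FIFO
            NewTopic topic = new NewTopic("app-logs", 12, (short) 3)
                    .configs(Map.of(
                            // a write isn't acked until 2 of the 3 replicas have it
                            "min.insync.replicas", "2",
                            // keep logs around for 7 days even after they're consumed
                            "retention.ms", String.valueOf(7L * 24 * 60 * 60 * 1000)));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```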
* Once the log is committed to the durable queue, that's it: the application can move on, secure in the knowledge that the log isn't going to get lost (see the producer sketch after this list).
* Multiple consumer groups can process the logs for different purposes; the usual ones are one group for persisting the logs to a searchable index and one for real-time alerting (see the indexer sketch after this list).
* Everything downstream of Kafka can be far less reliable, because a downstream outage just means the queue backs up.
* You can fake more throughput than you actually have in your downstream processors, because the overload just manifests as a lagging offset (the indexer sketch below also measures this lag).
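To make the first bullet concrete, a minimal producer sketch: `acks=all` is the setting that gives you "committed means safe"; the topic, broker address, and log payload are again placeholders:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class LogProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // don't ack until all in-sync replicas have the record
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // retries won't create duplicate records
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record = new ProducerRecord<>(
                    "app-logs", "checkout-service", "{\"level\":\"error\",\"msg\":\"...\"}");
            producer.send(record, (metadata, e) -> {
                if (e != null) {
                    // the only window where a log can be lost:
                    // retry here, or spill to local disk and replay later
                    e.printStackTrace();
                }
            });
        } // close() flushes anything still buffered
    }
}
```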
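And a sketch of one consumer group, a hypothetical "log-indexer" (an alerting group would look identical with a different group.id). Committing the offset only after indexing succeeds is what lets the downstream be unreliable, and the last few lines show the lagging-offset measurement: end offset minus current position, summed over partitions. The indexing call is a stand-in for whatever search backend you use:

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class LogIndexer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "log-indexer"); // hypothetical group
        // commit manually, only after the batch is safely indexed
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("app-logs"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    indexIntoSearchBackend(r.value()); // hypothetical indexing call
                }
                // if indexing crashed above, the offset was never committed and
                // the batch is redelivered on restart: at-least-once delivery
                consumer.commitSync();

                // backlog = end offset minus committed position, per partition;
                // a slow indexer just makes this number grow, nothing is lost
                Map<TopicPartition, Long> ends = consumer.endOffsets(consumer.assignment());
                long lag = 0;
                for (Map.Entry<TopicPartition, Long> e : ends.entrySet()) {
                    lag += e.getValue() - consumer.position(e.getKey());
                }
                System.out.println("consumer lag: " + lag);
            }
        }
    }

    static void indexIntoSearchBackend(String logLine) {
        /* write to Elasticsearch/OpenSearch/whatever you search logs with */
    }
}
```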