The only thing I can actually suggest is to avoid the hell out of logging if you can. It’s a really expensive concern and should be treated as such from day one. If your system has nothing actionable to log, then don’t log it.
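A minimal sketch of what “nothing actionable, don’t log it” looks like in practice: drop the chatty levels at the source rather than shipping and indexing them. The logger name here is hypothetical.

```python
import logging

# Emit only actionable events (WARNING and above); INFO/DEBUG chatter
# is dropped at the source instead of being shipped to an aggregator.
logging.basicConfig(
    level=logging.WARNING,
    format="%(asctime)s %(levelname)s %(message)s",
)

log = logging.getLogger("payments")  # hypothetical subsystem name

log.info("processed request in 12ms")     # dropped: nobody acts on this
log.warning("retrying upstream call")     # kept: someone may need to act
```

The point is that the filtering decision is made before the log line costs you anything downstream.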
At a high level, from observations of trying to handle 100GB a day:
CloudWatch is inflexible and expensive.
ELK is expensive to run and administer. The commercial variants are even worse.
Splunk is expensive and slow.
Datadog is expensive.
Loki is expensive to run and administer.
rsyslogd and grep start to feel like a viable solution eventually. Then you realise that you need about 50MB of that 100GB a day of logs, and enlightenment comes.
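The rsyslogd-and-grep endgame is just this: filter the noisy stream down to the lines anyone would act on. The sketch below simulates the stream with printf; in practice the input would be whatever files rsyslogd writes (paths and log format here are assumptions, not from the original).

```shell
# Simulate a noisy log stream, then keep only the actionable lines.
# Real usage would read /var/log/... files written by rsyslogd.
printf '%s\n' \
  'INFO request served in 12ms' \
  'ERROR payment upstream timed out' \
  'INFO request served in 9ms' \
  'WARN retrying upstream call' \
  | grep -E 'ERROR|WARN'
```

That one grep is, roughly, the 100GB-to-50MB reduction: the surviving lines are the entire actionable surface of the day’s logs.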