You have to define the problem to figure this out.
Personally, I'm a fan of multiple smaller, purpose-fit databases (regardless of application architecture): e.g. users/security, payments, logging, analytics, search, etc.
Doing this lets me pick the best tool for each job. It also keeps data isolated, which makes backup/recovery easier and lets you apply tighter security standards to each store as needed.
Another factor to consider: distributed databases drastically change your backup/recovery strategy, as do data growth and dataset size.
As for technology, I generally stick with relational DBs (PostgreSQL is my go-to) unless there's a specific problem I can solve better/easier/faster with another tool. For example, I might ship data (logs or otherwise) into Elasticsearch for searching. I also have no issue using AWS's managed versions of open source databases, but I don't generally reach for DynamoDB (not that I wouldn't, I just haven't seen a good argument for it given the other options, costs, etc.).
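To make the Elasticsearch example concrete, here's a rough sketch of shipping log records via the `_bulk` endpoint, which takes newline-delimited JSON (an action line followed by a document line per record). The index name and log fields are made up for illustration:

```python
import json

def build_bulk_payload(index_name, log_records):
    """Build the NDJSON body the Elasticsearch _bulk endpoint expects:
    one action line, then one document line, per record."""
    lines = []
    for record in log_records:
        lines.append(json.dumps({"index": {"_index": index_name}}))
        lines.append(json.dumps(record))
    # The _bulk API requires a trailing newline.
    return "\n".join(lines) + "\n"

logs = [
    {"level": "ERROR", "msg": "payment failed", "ts": "2024-01-15T10:00:00Z"},
    {"level": "INFO", "msg": "user login", "ts": "2024-01-15T10:01:00Z"},
]
payload = build_bulk_payload("app-logs-2024.01", logs)
# POST the payload to http://<es-host>:9200/_bulk with
# Content-Type: application/x-ndjson (via requests, curl, or the
# official elasticsearch client's bulk helpers).
```

In practice you'd batch records and use the official client's bulk helpers, but the payload shape is the key idea.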
I also use S3 a lot for backups and as a JSON store. I've always wanted to try using Elasticsearch as my query tool with all the data stored in S3 itself. It wouldn't work for everything, but it seems pretty damn reliable and scalable. Even the index snapshots could be stored in S3 for recovery purposes if needed.
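For the S3-as-JSON-store idea, a lot of the value comes from a sane key layout. Here's a minimal sketch, assuming date-partitioned keys (the prefix, bucket name, and record shape are hypothetical):

```python
import json
from datetime import datetime, timezone

def make_object_key(prefix, record_id, ts):
    """Date-partitioned key: lets you list, expire, or bulk-process
    objects by day using nothing but a key prefix."""
    return f"{prefix}/{ts:%Y/%m/%d}/{record_id}.json"

def serialize_record(record):
    """Compact, deterministic JSON body for the S3 object."""
    return json.dumps(record, separators=(",", ":"), sort_keys=True)

ts = datetime(2024, 1, 15, tzinfo=timezone.utc)
key = make_object_key("events", "order-123", ts)
body = serialize_record({"id": "order-123", "total": 49.99})
# With boto3 this would be uploaded via:
#   s3.put_object(Bucket="my-bucket", Key=key, Body=body)
```

The same key scheme works whether you query the objects directly or feed them into Elasticsearch for indexing.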