Literally instead of a table.
This stuff is everywhere. Microservices made it worse and half legitimized it.
There's a service that watches the transaction log of your regional replica so that you can make long-poll HTTP requests that return when any change matching your filter is committed. (Edit: usually the HTTP result handler is used to invalidate specific memoized results in the data flow graph, letting lazy re-evaluation re-fetch the database records as needed.)
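A minimal sketch of that invalidation hookup, assuming a hypothetical change-event payload with `table` and `row_id` fields (the real filter syntax and endpoint would be whatever the log-watcher service exposes):

```python
import json

# Memo cache for the data-flow graph: (table, row_id) -> cached result.
memo = {("trades", 42): {"price": 101.5}}

def handle_change_event(event_json):
    """HTTP result handler: drop the memoized entries a commit touched,
    so lazy re-evaluation re-fetches those rows on next access."""
    event = json.loads(event_json)
    memo.pop((event["table"], event["row_id"]), None)

# A long-poll loop would block on something like
# GET /changes?filter=...&cursor=... and call this per matching commit:
handle_change_event('{"table": "trades", "row_id": 42}')
```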
It makes a lot of sense for a financial risk system, where you end up calculating millions of slight variations on a scenario. The data flow model with aggressive memoization makes this sort of thing much cheaper.
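To make the "aggressive memoization" point concrete, here's a toy sketch: the shared node (`discount_curve`, an illustrative name, not from the original system) is computed once and reused across a thousand shocked variations of the same scenario.

```python
from functools import lru_cache

calls = {"curve": 0}  # count how often the expensive node actually runs

@lru_cache(maxsize=None)
def discount_curve(date):
    """Expensive shared node: imagine a DB fetch plus curve bootstrap."""
    calls["curve"] += 1
    return {"date": date, "rate": 0.03}

def scenario_pv(date, shock):
    """One scenario variation: reuses the memoized curve node."""
    curve = discount_curve(date)
    return 100.0 / (1 + curve["rate"] + shock)

# 1000 slight variations on one scenario hit the curve node only once:
pvs = [scenario_pv("2024-01-31", s / 10000) for s in range(1000)]
```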
However, I saw plenty of systems where you'd attempt to write your request to the next key matching some pattern (retrying with the next key if it already existed); the request would contain some parameters plus the database key and/or filesystem path where results should be written.
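For illustration, the key-claiming pattern described above boils down to an insert-if-absent plus a retry loop; here a plain dict stands in for the database, and the key scheme is made up:

```python
db = {}
KEY_PATTERN = "job/{:06d}"

def put_if_absent(key, value):
    """Insert-if-absent, like an INSERT on a unique key or an
    O_EXCL file create: fails if someone else claimed the key first."""
    if key in db:
        return False
    db[key] = value
    return True

def enqueue(request):
    """Claim the next free key, retrying on collision."""
    n = 0
    while True:
        key = KEY_PATTERN.format(n)
        if put_if_absent(key, request):
            return key
        n += 1

k1 = enqueue({"params": {"x": 1}, "result_path": "/results/a"})
k2 = enqueue({"params": {"x": 2}, "result_path": "/results/b"})
```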
Under-experience with databases easily results in rewriting a database using a message queue/bus. Under-experience with message queues/buses easily results in rewriting a message queue/bus using a database.
Databases are still fairly poorly documented when it comes to administrative work.
There is an incredible amount of tutorials, books and courses on how to write SQL queries and stuff... But there is almost zero content on how to properly administer a database.
I mean, from novice admin to DBA-level capabilities.
I've said all this before and I'm ready to write it again: I think there's a good market space for DBA-style courses.
The most unpleasant codebases I've dealt with are ones that have suffered from a lack of strong leadership, and they are almost invariably microservice setups that pull in everything but the kitchen sink, usually because it's just trendy to use it. Monoliths can get pretty damn ugly too, but at least it's contained in one single codebase.
It went like:
App -> DynamoDB -> Kafka Connect Sink Process -> RDS -> Kafka
The reason for all the middle processes was that teams couldn't agree on how to structure their data, and the first app would dump literal nonsense sometimes, so the Kafka Connect process's job was to clean it up and drop any of the nonsense they pumped into it. Pretty sure there was a gnarly log aggregation layer in the middle somewhere too IIRC.