So far my usage has been minuscule compared to the PS limits. That said, I had been hoping for better tools to identify "problem queries" early, instead of only when the bill comes due, so I'm happy to see work in that direction.
Just a random anecdote about PlanetScale:
A few weeks ago I realized their databases did not have the time zone tables loaded, and it wasn't something I could load myself. I needed this so I could use `CONVERT_TZ` to convert from UTC to the user's TZ (for report aggregation). I reached out to support, and in about a week they had added it to their roadmap, shipped it, and turned on the new feature for my DBs. They have been a joy to work with so far, and I encourage you to give them a shot, especially if you are on Aurora Serverless (V1 or V2).
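For anyone hitting the same wall: until those tables are populated, `CONVERT_TZ` with named zones silently returns NULL. A rough sketch of the kind of rollup this unblocked (the table and column names are placeholders, not my real schema):

    -- 'events' / 'created_at' are placeholder names. With the time
    -- zone tables loaded, named zones resolve; without them,
    -- CONVERT_TZ returns NULL.
    SELECT DATE(CONVERT_TZ(created_at, 'UTC', 'America/New_York')) AS local_day,
           COUNT(*) AS row_count
    FROM events
    GROUP BY local_day;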
I see the documentation and graphs, but I can't tell whether PlanetScale provides a query interface into this data.
Single-user systems do okay with reports like the ones in the link, where you can actually go in and drill down into specific details.
It would be much more awesome for the DBA-style user if the reporting data were actually loaded into another database/table, with join schemas, so you could query the data exactly as the report does. That, of course, assumes the DBA is the consultant type who is dropped in to fix cost overruns, rather than the app developer going over their own queries again.
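To make that concrete, here is the sort of query I mean, against a purely hypothetical query_stats table (none of these names are actual PlanetScale schema):

    -- Hypothetical schema, just to illustrate the workflow:
    -- query_stats(fingerprint, rows_read, total_time_ms, started_at)
    SELECT fingerprint,
           COUNT(*)           AS executions,
           SUM(rows_read)     AS total_rows_read,
           SUM(total_time_ms) AS total_time_ms
    FROM query_stats
    WHERE started_at >= NOW() - INTERVAL 7 DAY
    GROUP BY fingerprint
    ORDER BY total_rows_read DESC
    LIMIT 20;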
In my last job, I built something of this sort (Hive has a protobuf SerDe + a special table named sys.query_data), so that I could connect a CDSW Jupyter notebook and narrow down queries with a Python program + a loop.
Of course, those introspection queries were themselves customer-paid queries, but the approach was much more flexible, and a bunch of canned reports did most of the work when moving it across customers.
But before it was baked into Hive/CDW, it was actually a syslog parser feeding into a SQLite db, which worked almost exactly the same way (though it was mostly aimed at txn locking/conflict checking across hundreds of queries touching the same Informatica audit log table).
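The conflict check itself was just a self-join; roughly like this, assuming a parsed_queries table of the shape the parser produced (these names are illustrative, not the originals):

    -- Illustrative shape: parsed_queries(query_id, table_name,
    -- start_ts, end_ts). Pairs of queries whose execution windows
    -- overlap on the same table are lock-conflict candidates.
    SELECT a.query_id, b.query_id, a.table_name
    FROM parsed_queries a
    JOIN parsed_queries b
      ON b.table_name = a.table_name
     AND b.query_id > a.query_id
     AND b.start_ts < a.end_ts
     AND a.start_ts < b.end_ts
    WHERE a.table_name = 'audit_log';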
this is the fourth day in a row of planetscale ads^H^H^H blog posts on the hn front page. as i mentioned in yesterday's thread, innodb_rows_read is known to be buggy. regardless, by design it includes cached rows, which is a terrible thing to base billing on. real cloud providers bill on i/o instead, since that is a more reasonable metric of "use".
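the cached-rows part is easy to see on any stock mysql, since Innodb_rows_read is a logical counter (table name below is a placeholder):

    SHOW GLOBAL STATUS LIKE 'Innodb_rows_read';
    SELECT COUNT(*) FROM some_table;  -- first scan may hit disk
    SELECT COUNT(*) FROM some_table;  -- second scan is served from the
                                      -- buffer pool, no disk i/o
    SHOW GLOBAL STATUS LIKE 'Innodb_rows_read';  -- advances by the row
                                                 -- count both times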
planetscale's fork of mysql-server adds only a single commit, which exposes rows_read in an extra place. this from a company that keeps talking about "building a database": https://github.com/planetscale/mysql-server
The trickier part is orchestrating the ongoing management of that across a large, dynamic fleet. And in this case, it was about much more than simply loading the tables; it was about using them to support importing databases into PlanetScale: https://github.com/vitessio/vitess/pull/10102
I'll link to my other comment on the billing issue: https://news.ycombinator.com/item?id=31509240
We've had to make some other changes to our MySQL fork as well, which will show up there, but we'd love to not have any patches at all, and in the meantime to keep the patch set minimal (just as Amazon certainly does with RDS and Aurora). And I would certainly argue that Vitess, which is what we build PlanetScale around, is a meaningful piece of technology that pairs with MySQL to make a great database: https://vitess.io. You're of course free to disagree, and I wish you all the best as you work to build something great in the future.
https://aws.amazon.com/dynamodb/pricing/
Their team is awesome: I requested a couple of features in the CLI and they were there within a few hours. Support is responsive, and the sales team was super helpful getting everything running and migrated.