The nice thing about PlanetScale is you get nearly unlimited connections (soft limit of like 250K IIRC) so making 1 connection per active lambda isn't a problem at all.
I've been using PlanetScale since shortly after they went GA and I've been very happy so far. Cheaper than Aurora Serverless, less hassle, and the branching feature is super cool. Zero-downtime deploys, with rollback support, feel magical.
This specifically is targeting environments where a MySQL client isn't able to run.
Oh, absolutely and I didn't mean to imply it wasn't useful, I was just saying I went down the Prisma path and will be sticking with that. Even so I'm glad this exists for times that I don't need the full weight/power of Prisma but do want to talk to a PlanetScale DB.
How does this work?
[0] https://planetscale.com/docs/concepts/deploy-requests
[1] https://planetscale.com/blog/its-fine-rewind-revert-a-migrat...
import { connect } from '@planetscale/database'
const config = { host: '<host>', username: '<user>', password: '<password>' }
const conn = connect(config)
const results = await conn.execute('SELECT 1')
The idea is these runtimes typically only allow HTTP outbound requests, not whatever protocol is typically used to connect to a planetscale db.
[1] https://planetscale.com/docs/learn/operating-without-foreign...
I used Aurora Data API before moving off Aurora Serverless (insane pricing) to Prisma and PlanetScale. I don't think I'd go back to the HTTP API, as Prisma works very well and I enjoy using it. One downside to Prisma is that the DB "engine" is an arch-specific (obviously) binary that is pretty hefty, I want to say ~50MB. That can be a killer in serverless, but I was able to work around it without much issue. If I ever wanted to dive into the world of Lambda layers, I could probably move it into its own layer (however, that still counts towards your max Lambda size).
Yeah, I think this new thing would be less useful for Lambda, as that does support TCP connections. However, an HTTP API is required for, e.g., Cloudflare Workers, where you can't create a normal MySQL client. I think that's where this could shine.
But the underlying tech is exactly the same as we use for handling traditional MySQL connections, so there isn't anything to fear.
If so, and you used it, what was your experience running Fauna in CF Workers?
It seems like you lose most of the benefit of your code running at the edge if your database is still in an AWS region.
I plan on following up with a more technical deep dive on some of these aspects, but it's quite hard to do a 1:1 comparison.
While, obviously, HTTP also uses TCP, I'm going to assume you're asking more about the binary MySQL protocol vs HTTP.
On the surface, yes, HTTP is going to have more overhead with headers and the other aspects that come with HTTP. Also, in this case, the payload is JSON, both on the request and response, which is going to be slightly more bulky than the payloads over the MySQL protocol.
So where things get interesting is the real-world performance implications. Networks and CPUs are really fast. HTTP and JSON have both been scrutinized extensively. While they are admittedly not the most efficient standards, they are used so heavily that parsers for them are extremely optimized.
Mixing in modern TLS 1.3 with modern ciphers, which is something you're very unlikely to get with a traditional MySQL client, we can achieve a much faster first connection time. The modern ciphers TLS 1.3 mandates also make the transport itself significantly faster than a slower cipher would. This isn't the best comparison, since typically you're using MySQL without TLS, but when talking to a service provider like us, we require it for obvious reasons.
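To put rough numbers on the handshake difference: TLS 1.3 completes its handshake in one round trip, versus two for TLS 1.2. A back-of-envelope sketch (the 50 ms RTT is an illustrative assumption):

```javascript
// Back-of-envelope: round trips before the first query byte,
// assuming a 50 ms client<->server RTT (an illustrative number).
const rtt = 50; // ms
const tcp = 1 * rtt;   // TCP three-way handshake
const tls13 = 1 * rtt; // TLS 1.3: 1-RTT handshake
const tls12 = 2 * rtt; // TLS 1.2: 2-RTT handshake

console.log(tcp + tls13); // 100 ms of setup over TLS 1.3
console.log(tcp + tls12); // 150 ms of setup over TLS 1.2
```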
Next, with HTTP, we get to leverage compression. In our case, we can use browser-native compression with gzip, or, if using something server side, we support compression formats like Snappy. Combine something like gzip or Snappy with HTTP/2, and given that the bulk of a query response is typically the result data itself, not the protocol, compression can make up a decent amount of the difference. Again, I'm being hand-wavy, since every query pattern and result set is going to be wildly different based on your data, so it isn't fair to say "this is x% more or less efficient."
And lastly, with HTTP, especially HTTP/2, we can multiplex connections. So similar to the gains with TLS 1.3, you can do many many concurrent database sessions across 1 actual TCP connection with HTTP/2. And similarly to TLS 1.3, the cost here is in connection timings, and management of a connection pool. You don't need a pool of connections on the client, so your startup times can be reduced.
As a stretch goal, HTTP/3 (with QUIC) is in the crosshairs, which should eliminate some more transport cost since it uses UDP rather than TCP. My hunch is that, with all of this combined, an optimized driver leveraging HTTP/3 might beat out a MySQL client overall. Time will tell though!
So there's no simple answer here, everything has tradeoffs. :)
A lot of this for me/us at this point is still theoretical, since we don't have tons of applications using it yet. Serverless was an easy first target, since those runtimes require HTTP as the transport. The real experiments will come when we, say, put together a Python or Go driver based on these APIs and compare it side by side to a native MySQL driver.
That's awesome, I was assuming a sizable overhead (even if it is for a sizable benefit). But I can see how lots of factors contribute to closing the gap, and HTTP/3 might even be a negative overhead since it almost cuts out a protocol layer by stepping down to UDP.
Thanks again!
So we work there just fine.